A key question that emerges in the philosophy of technology is whether technological artifacts can embody values. It is a truism at this point that technology is value-laden (van den Hoven and Weckert 2008), that is, technology can in some sense be causally efficacious with respect to the kinds of things we come to value (i.e., it can serve as a means to our ends, and so have instrumental value). A far more pertinent question, however, concerns the status of these artifacts themselves: is it possible for these technological artifacts to embody values (Johnson and Noorman 2014; van de Poel and Kroes 2014; Klenk 2020)? Can artifacts, independently of their use, be said to have value? This is one of the more controversial questions in philosophy of technology, and it is the question I will concern myself with in this paper. “Value”, however, is a diverse concept, with many competing accounts of what exactly it is and, moreover, of what kinds of value we might be talking about (epistemic, moral, etc.). In this paper I will be concerned with moral values specifically, and with whether it might be possible to embed such values into technological artifacts.
Consider the case of the American National Rifle Association (NRA), whose opponents advocate against the proliferation of firearms and claim that “Guns kill people”. The popular retort from the NRA, captured in their slogan, is that “Guns don’t kill people, people kill people.” Implicit in this response is the neutrality thesis regarding technology: the gun itself does not carry any value and is only instrumentally valuable. Its value is determined by its use by human beings, and this type of response denies that the technology itself embodies any values (Peterson and Spahn 2011). Implicit in the first slogan (“Guns kill”) is the view that the material components of the gun are irreducible to the social qualities associated with the user-of-the-gun (Latour 1999: 176). Some material components of the gun, therefore, can come to embody values independently of the qualities of the user. In this way an ordinary citizen, by virtue of using a gun, can become a threat to society and themselves. The second slogan (“Guns don’t kill, people kill”), however, seems to suggest that it is not the material components of the gun (its design, for example) that make it dangerous. The gun is simply a neutral carrier of intentions, and those intentions naturally flow from the person who is using the gun. If the user-of-the-gun is a good person, the gun will be used with discretion and in morally appropriate ways. Conversely, if the user is insane or morally bankrupt, the gun will be used in morally reprehensible ways: all this, without any change in the constitution of the gun itself. Latour considers the first slogan to involve a material interpretation of artifacts, and the second to offer us a sociological interpretation thereof (Latour 1999: 177). The “material” interpretation, following Latour, “make[s] the intriguing suggestion that our qualities as subjects, our competences, our personalities, depend on what we hold in our hands” (Latour 1999: 177). The “sociological” interpretation, in contrast, moralizes the situation. Here it is worth quoting Latour at length:
“For the NRA, one’s moral state is a Platonic essence: one is born either a good citizen or a criminal. Period. As such, the NRA account is moralist – what matters is what you are, not what you have. The sole contribution of the gun is to speed the act. Killing by fists or knives is simply slower, dirtier, messier. With a gun, one kills better, but at no point does the gun modify one’s goal” (Latour 1999: 177).
The suggestion here (from the NRA at least) is that if we can learn to simply be better persons, then we do not have to worry about the moral effects of artifacts. If we are trained, for example, to uphold better gun safety standards, then we would have done all we can. The above distinction between “material” and “sociological” interpretations is of course a rough caricature of the actual positions held and defended by various philosophers of technology. For example, nobody would claim that the gun makes no contribution to the killing, and nobody would claim that the gun is wholly responsible either. Those who oppose the proliferation of guns merely assert that these artifacts can affect those who make use of them. Conversely, gun control opponents merely claim that guns are but one efficient way of carrying out an act, with other things also capable of performing the same task (Latour 1999: 176; Verbeek 2005: 155). This caricature, however, serves the purpose of introducing the topic of value embedding in technology. In what follows I will briefly introduce and then critique the so-called “neutrality thesis” regarding technological artifacts (Illies and Meijers 2009; Peterson and Spahn 2011).
The neutrality thesis
The Neutrality Thesis states that technological artifacts are merely neutral means by which agents achieve their ends (Illies and Meijers 2009: 421). In this crude formulation, which we can call the Strong Neutrality Thesis (SNT), the view has little support, owing to the society-wide effects that technological artifacts have. A more sophisticated version of the value neutrality of technology is due to Peterson and Spahn (2011), who show that it is implausible that technology never affects the moral evaluation of actions (2011: 423). They call this more sophisticated version the weak neutrality thesis (WNT). To make their point salient, they use the example of a terrorist
“who intends to kill ten million people in a big city by blowing up a small atomic bomb hidden in a suitcase. Compare the possible world in which the terrorist presses the red button on his suitcase and the bomb goes off, with the possible world in which he presses the red button on the suitcase but in which nothing happens because there was actually no bomb hidden in the suitcase. In the first example ten million people die, but in the second no one is hurt” (Peterson and Spahn 2011: 423).
In the example above, in the first case, the action of pushing the button is morally wrong. This, however, is not necessarily true of the second case. The point is that the mere presence of the bomb in the suitcase changes the moral evaluation of the action (Peterson and Spahn 2011: 423, my emphasis). In the case where millions die, we are outraged and might demand reparations. In the case where nobody dies, we might be outraged but it would make little sense to seek reparations. Thus the moral valence of the action changes, without necessarily changing the fact that in both cases an immoral act was committed. At the very least, therefore, technology can come to influence consequences, and our moral evaluation of those consequences. But can technology come to influence what we value?
Artifacts influencing value
Consider a seemingly trivial example, borrowed from Verbeek (2005: 5), of microwave ovens. Initially the microwave, as a novel technology, was targeted primarily at men. It was marketed as a technologically sophisticated device and appeared alongside video recorders in stores. Once this market became saturated, however, the microwave was marketed more as an ordinary cooking device, and started appearing alongside refrigerators and ovens (Verbeek 2005: 5). There was
“a gender divide whereby ‘brown goods’ such as televisions, video and hi-fi were seen as high-tech and male-oriented by the company engineers, marketers and retailers, while ‘white goods’ such as refrigerators, dishwashers and clothes washing machines were seen as low-tech and female oriented” (Henry and Powell 2017: 35).
Early designs of the microwave positioned it as a stereotypically ‘brown good’, appealing to single men who did not have wives at home to prepare their meals for them in advance (Cockburn 1997). However, after the microwave failed to sell, retailers reconsidered their options and decided to relabel it as a ‘white good’ and market it to women. This involved, among other things, a change in colour scheme (from dark to light) (Henry and Powell 2017: 36). Moreover, the microwave made possible a new kind of meal: the frozen meal for one, which can be quickly prepared with minimal fuss. Before the microwave, there existed few options for quickly preparing frozen meals; with this new technology it became easy, making dining alone a far more convenient event than it was before. In this way, the microwave can be said to have altered the possible ways we can take meals. Subsequently, this change in our available action schemes makes us value certain actions (such as eating alone) more than would have been possible without the technological artifact being present (Illies and Meijers 2009: 422).
“Thus technologies are not understood as neutral (a mere addition to a pre-given social system), or determinative (directly causal of changes in a social system) but as an embedded and co-constituting feature of society and its structures, cultures and practices” (Henry and Powell 2017: 36–37).
In this sense, technological artifacts are not simple “intermediaries”, but rather mediators, in the relation between humans and the world (Verbeek 2005: 114). They change how the world appears to us and our possible interactions with it. In this way, technology, broadly construed, can come to influence what we value, and increase the likelihood of certain states of affairs coming about. In what follows I will outline how technological artifacts can influence moral values.
Artifacts influencing moral values
Let us start with an examination of “Killer robots”—weapon systems capable of performing lethal military operations that were once the domain of human beings. An example of this type of system is the “Predator” drone [Footnote 1], an unpiloted combat aerial vehicle capable of remotely performing military operations such as air-to-ground missile launches (Sparrow 2007: 63; Royakkers and van Est 2015: 560). Talk of drone technology has recently become part of our common lexicon, with former US president Barack Obama’s controversial use of drones to wage war in Iraq being a key trigger point for this debate. Moreover, the introduction of Distinguished Warfare Medals for drone operators has also drawn the public’s attention. Such awards can outrank combat medals awarded to US troops, and the public’s uncertainty as to whether drone pilots deserve to be acknowledged in this way is suggestive of the lack of consensus with regard to drone warfare and its place in the military (Sparrow 2015: 380).
Consider an example from the Kosovo war, in which NATO aircraft were forced to fly above 15,000 feet to avoid enemy fire. In this case, any bombs deployed would have had to be dropped from this height. In one instance, this tragically resulted in NATO aircraft mistaking a convoy of buses transporting refugees for Serbian tanks, and subsequently bombing them (Royakkers and van Est 2015: 560). In such a situation, an unpiloted drone would have been preferable, as it could fly at a lower altitude, taking greater care in target selection and the subsequent use of lethal force. Such drones also reduce the need for human lives to be put in danger in military operations, creating a new class of ‘cubicle warriors’ (ibid.: 560). They may also be cheaper than human soldiers in the long run (a military drone does not need a pension scheme or a hospital plan), and they outperform human soldiers in specific domains (human soldiers tend to require sleep to function optimally) (Müller 2014: 4). There is, therefore, a strong prima facie case for driving the project to create ever more complex drone technology, and this is indeed reflected in the US government having funded research into the construction of autonomous robots since the early 2000s via the Defense Advanced Research Projects Agency (DARPA) (Wallach and Allen 2009: 49) [Footnote 2]. One could even argue that it would be morally impermissible to place a soldier in a life-threatening situation if that same task could be carried out by a military robot, in which case the use of such robots could be ethically defensible, and even encouraged.
Armed with this understanding of military drones more generally, we can consider a situation in which drones take lethal action and civilian casualties are incurred. This is not mere speculation: it is estimated that since 2004 between 769 and 1725 civilians have been killed in drone strikes in Pakistan, Yemen, Somalia and Afghanistan (Drone Warfare 2019). Moreover, drones are not infallible, and we can foresee a scenario in which a decision is made to launch a strike, but the target is misidentified (as in the Kosovo example above) (Tollon 2019: 20). In such cases it is still human beings who are pulling the trigger, albeit from a distance. Therefore, when evaluating such civilian deaths, we should look exclusively towards the human beings who can be held morally responsible for these deaths, since holding the drone responsible would be conceptually inappropriate. This is generally because (i) human operators are taken to be ultimately responsible for the actions of such drones, and (ii) moral responsibility is taken to entail punishment, and drones cannot be punished (e.g. Sparrow 2007: 74). However, notwithstanding the fact that human operators are held morally responsible, it is clear that the use of such drones makes the act of killing far easier.
The history of military technology is such that at each new stage of development we get better at killing from a distance: from swords to SWORDS (a remotely operated machine gun which makes use of the Special Weapons Observation Remote Direct-action System) (Wallach and Allen 2009: 20). Killing from a distance gets around two of the most common barriers to an effective war machine: first, soldiers’ fear of being killed, and, second, their resistance to killing others. The fact that machines currently lack the capacity for affect is seen as an improvement on human soldiers, as it means they (machines) would not have these affective limitations.
In the example above, therefore, it is possible to discern a distinct change in moral values: in the classic case, soldiers are trained to engage with combatants and non-combatants in warfare. This is predicated on the fact that soldiers will find themselves in situations where they have to make decisions on the fly, without perfect information, while simultaneously being in the theatre of war itself, and, therefore, factoring into their decisions the potential consequences of their actions for their own lives. A courageous action, in such a scenario, might be risking one’s life to save another, as courage involves a personal sacrifice to do what is right. Thus (and this is but one example) the virtue of being courageous in this sense is valued, and indeed deemed morally commendable. By contrast, remotely operated drones outsource many of the affective components of warfare, meaning that decisions can be made outside the context of the theatre of war itself. Specifically, drone operators need not be concerned with whether they will live or die when performing a given military operation, and so will not factor this into the decisions they make, as there is no personal sacrifice to be made [Footnote 3]. Here the distinction between moral and physical courage becomes paramount. Physical courage refers to the capacity to face bodily injury (or death), while moral courage refers to the capacity to make difficult moral decisions (Sparrow 2015: 383). On the surface, it seems as though drone operators may not exercise physical courage, due to their being geographically separated from the theatre of war. However, it seems plausible that they could cultivate moral courage, as they could of course refuse to follow an instruction to kill should they deem it problematic on moral grounds, despite whatever institutional pressure there may be to follow such a command. However, and this is crucial, in the case of military personnel who find themselves “on the ground”, moral and physical courage go hand in hand. It is by virtue of their proximity to conflict that such soldiers are said to act courageously, literally risking their lives for what they believe to be right. Their physical courage, in a sense, gives rise to moral courage.
This is not to say, however, that drone operators are therefore incapable of moral courage. It seems right to me that such persons can and do exercise the capacity for moral courage when they refuse orders that may be illegal or immoral. However, to my mind, the absence of physical risk matters significantly [Footnote 4]. And it is this that constitutes a change in how we think about military ethics more generally: in the past, courage (at least in the military sense) was understood to involve both physical and moral criteria, with the two being joined at the hip. Now, however, it is possible to discern a change whereby the one can be decoupled from the other. I leave it open as to what the exact relationship between moral and physical courage may be. My point is simply that our use of such teleoperated weapons has forced us to consider a change in what constitutes the moral value of “courage”, at least in military settings.
Intentionally designed features as embodying value
What I have shown above is that technological artifacts can influence what we come to value. Moreover, I showed how these artifacts can also change what comes to constitute a given moral value. In what follows, however, I would like to explore whether such artifacts can have value independently of their use. That is, can technological artifacts be “good” or “bad” by virtue of their designed properties alone? Should this question be answered in the affirmative, it would mean a significant burden would be placed on those who design such systems. There would need to be serious ethical considerations and extensive consultations around the intended and unintended consequences of specific design choices. Moreover, it would mean aligning the values of our technological systems with the values we aim for as a society (How 2017; Taddeo and Floridi 2018; Floridi et al. 2020). I will show that we should not focus exclusively on the designed properties of artifacts. First, this kind of approach does not allow values to change, and, second, it encounters the difficulty of figuring out what exactly designer intentions may be. From this I will introduce an affordance account of technological artifacts, which aims to shed light on how technological artifacts afford certain uses, and in this way, independently of their actual use, can encourage or discourage certain actions.
A good place to start for such a design-focussed account is provided by van de Poel and Kroes (2014), where the authors claim that value sensitive design (VSD) can lead to artifacts capable of embodying value (2014: 112). Their account turns on technical artifacts being intentionally designed to have certain features, and on the claim that, in some cases at least, these features can result in technology embodying value (2014: 112). I will outline and then critique their argument.
The intentional account
Van de Poel and Kroes make use of two contrasting examples to underscore their thesis: sea dykes and knives. Sea dykes, as flood protection embankments, serve the function of protecting low-lying land near the sea from flooding. As the authors note, the point is not that sea dykes are instrumentally valuable (i.e., that they can be used as effective vehicles for safety), but rather that safety is an integral part of their function (i.e., safety, as a design specification, is part of their makeup) (Van de Poel and Kroes 2014: 114). Contrast this with a kitchen knife: the function of such a knife is to cut things. Such cutting may be instrumentally valuable, for example, for the maintenance of good health or well-being. However, and significantly, the realisation of these final values is not part of the function of knives, nor are these values to be found in the design specification of knives in general (Van de Poel and Kroes 2014: 114). In other words, in the case of the knife, its function and the final values that can be achieved via this function can be separated. This is not the case in the sea dyke example, as their instrumental purpose (prevention of flooding) is necessarily tethered to their final value, the value for which they are intentionally designed (safety from flooding) (Van de Poel and Kroes 2014: 114).
Based on this discussion, the authors go on to claim that:
“the embodiment of extrinsic final values in technical artifacts thus depends on both an intentional condition (‘x has been designed for G’) and on a condition that primarily refers to physical properties (‘The designed properties of x have the potential to achieve or contribute to G (under the appropriate conditions)’)” (Van de Poel and Kroes 2014: 118).
Thus, their account hinges importantly on the designed properties of the artifact in question, as these artifacts can only be said to properly embody value if they have been intentionally designed as such. However, just because an artifact has been intentionally designed to embody a specific value does not mean that it will always realise that value in practice (van de Poel and Kroes 2014: 119). In this way, there is a crucial difference between the intended value (that which designers aim to embody), the embodied value, and the realised value of a technical artifact. The embodied value is that which has been successfully built into the artifact through intentional design, whereas the realised value is that which comes about in practice or use (van de Poel and Kroes 2014: 119). The context in which a technical artifact is embedded, therefore, plays a crucial role in co-determining whether the intended or embedded value is indeed realised (van de Poel and Kroes 2014: 119).
In other words, design intentions underdetermine the value that an artifact may come to embody. There are cases where the specific use of a technology in different situations leads to the realisation of different values (van de Poel and Kroes 2014: 120). In addition to this, the authors also point out that VSD is only the first step in the process of creating an artifact that properly embodies a relevant value. This implies that designers have an obligation to consider not just their design intentions, but also the potential contexts in which the device will be used, anticipating the potential for the multiple realisability of values in practice.
Problems with the value sensitive design account
While the argument presented by van de Poel and Kroes is significant for the way in which it makes salient how technologies can embody values, I will argue below that this account still has some shortcomings. Specifically, I will follow Klenk (2020), who argues against van de Poel and Kroes by showing that their account has both metaphysical and epistemic issues. From this he introduces the concept of an affordance, borrowed from ecological psychology, into discussions surrounding value embedding in philosophy of technology.
Metaphysical issues
The first issue that Klenk raises is metaphysical, and pertains to the intended versus the actual use of an artifact (2020: 5). According to Klenk, IHAVE [Footnote 5] creates a disjuncture between actual use and the question of whether an artifact embodies a value (2020: 5). This suggests that while the designers of artifacts are the source of value for the various technical artifacts, it is not necessary for them to also sustain those values in practice. An implication of this is that how an artifact comes to be used plays no role in determining what value it embodies. While van de Poel and Kroes do acknowledge that designers must consider the potential uses of the artifact, this consideration is only applicable insofar as it features in the design phase (2014: 120). While the neutrality thesis claimed that an artifact’s value is only to be found in its use, van de Poel and Kroes seem to be claiming that use has no bearing whatsoever on value (2014). In such a scenario, we would always have to look at the design intentions behind an artifact to determine its value. It is here that the metaphysical issue rears its head: if the value of an artifact is “fixed” at its origin, then it does not seem possible that the embodied value of an artifact can change over time (Klenk 2020: 5).
An implication of this is that should we want to claim that the value of an artifact has changed, we would then need to claim that the design intentions also changed. This is of course impossible: we cannot go back in time and change the intentional history associated with a particular technical artifact (Klenk 2020: 6). The only kind of value change that is possible on this account is the complete elimination of value: once an artifact stops contributing to the relevant designed value, it ceases to have any value whatsoever. There are, however, cases of appropriation, where the embedded value of the technology is shown to be subject to change, without any change in the artifact’s intentional history (Klenk 2020: 6). We, therefore, have both metaphysical and practical grounds for questioning the tenability of the intentional history account.
Moreover, there is the issue of how designer intentions are supposed to feature in the technology itself. If it is designer intentions that really matter, then what is the use of claiming that technology embodies value? If we ought to look toward designer intentions, then it seems that any value that we would find in technology would simply be a derivative of those values which the designers had in mind. It, therefore, makes little sense to speak of technology embodying values at all, as the values seem to be in the heads of the designers. This leads to certain epistemic issues.
Epistemic issues
To see the epistemic issues with IHAVE, once again consider the determining role that design intentions play in the value an artifact comes to embody. To fix the value of a given artifact, therefore, it should be possible to have reliable access to those design intentions, to ensure that our judgment is epistemically sound. There are two possible ways in which these intentions can be uncovered: directly or indirectly (Klenk 2020: 7). Directly observing intentions is impossible [Footnote 6], and the best we can hope for in this regard is an accurate inference. At best, this inference gives us indirect access to intentions.
Indirect access can be obtained in a number of ways. First, one could look at the observable features of the artifact in question and reverse engineer what the design intentions may have been. However, since an artifact’s observable features underdetermine the intentions behind it, this route seems fraught with difficulty (Klenk 2020: 2). Second, designers often make their intentions clear, either verbally or through explicit documentation of the design process. In such cases, we seem to have a reliable way to track design intentions, as these documents are in some cases publicly accessible (or can at least be uncovered upon request). These documents can illuminate the design intentions and how they relate to the physical properties of the artifact. Klenk, however, points out that we can have situations in which the same artifact has two different intentional histories associated with it (2020: 8). This is clearest in cases of replication, and specifically replication with the intention of novel usage. It is possible to imagine an engineer, E, who designs a specific artifact A, recording their design intentions along the way. Now, another engineer, E*, comes across A, but intends to use it for very different purposes. E* records their design intentions for product A*, which are substantially different from those of E. However, the physical properties of A and A* are identical, yet they have different intentional histories. In such a scenario, we would have to decide which intentions matter most, and only then would we be able to determine which values the physically identical artifacts have. IHAVE, however, does not provide us with certainty as to which intentions “count”, creating epistemic uncertainty (Klenk 2020: 8). Following these difficulties with IHAVE, Klenk argues that we should instead investigate an affordance account of value embedding in artifacts.