In the past, the term ‘agency’ referred to the capacity to act; action was distinguished from mere behavior, as in the case of the behavior of artifacts; and only those entities with the capacity to act were called ‘agents’. Agency was presumed to apply exclusively to humans whose acts were seen as resulting from intentions. More recently, however, this notion of agency has been contested. AI researchers have adopted the idea of “artificial agents” (Jennings et al. 1998; Ferber 1999; Weiss 1999; Wooldridge 2002; Floridi and Sanders 2004; Floridi 2008). In AI, the operations of a program or a robot are seen as actions, which means that programs and robots count as agents. They are ‘artificial’ in that their actions are computational and embodied inside electronic circuits, as opposed to the ‘natural’ actions performed by humans. What, then, are the implications of saying that AI artifacts are (artificial) agents?
The current discourse outside AI draws on multiple notions of agency. In the Stanford Encyclopedia of Philosophy, for example, Schlosser distinguishes a broad and narrow meaning of agency:
In a very broad sense, agency is virtually everywhere. Whenever entities enter into causal relationships, they can be said to act on each other and interact with each other, bringing about changes in each other. In this very broad sense, it is possible to identify agents and agency, and patients and patiency, virtually everywhere. Usually, though, the term ‘agency’ is used in a much narrower sense to denote the performance of intentional actions. (Schlosser 2015)
Here Schlosser expands the notion of agency, in its broad sense, to include any entity that is causally efficacious. Entities that act with intentions are also causally efficacious, though the causal chain that explains their actions originates in intentions. Later we will discuss claims made by AI researchers as to the intentions of artificial agents.
Literature on agency now spans a number of disciplines including Science, Technology and Society (STS) studies, where the notion of agency is extended in a way that parallels Schlosser’s first sense of agency. For example, actor network theory (ANT) introduces a neutral term, ‘actants’, to include human and non-human things (including nature, ideas, relationships, as well as artifacts) that contribute to the production of a state of affairs (Latour 1996; Law and Hassard 1999; Sayes 2014). Latour is a leading proponent of ANT and as Sayes (2014) explains: “Latour (2005: 71) maintains that one need only ask of an entity ‘[d]oes it make a difference in the course of some other agent’s action or not? Is there some trial that allows someone to detect this difference?’ If we can answer yes to these two questions, then we have an actor that is exercising agency—whether this actor is nonhuman or otherwise” (p. 141).
In this paper, we draw on Schlosser’s two notions of agency, referring to them as causal agency (broad) and intentional agency (narrow), and we add a new type, which we call triadic agency. Triadic agency is introduced as a way of capturing the type of agency that is at work when humans act with technological artifacts. We use triadic agency as a heuristic device to reveal the role of humans and artifacts in producing events. Although nature (e.g. ecosystems) and ideas (e.g. fairness) shape what happens, our focus is on humans and artifacts. As we will explain later, triadic agency not only expands what can be said about them, it grounds ascriptions of responsibility, both ethical and legal.
As already explained, causal agency has to do with causality. When artifacts are said to be agents in this sense, the emphasis is on their causal efficacy. This usage draws attention to the important role that artifacts have in the causal chain that produces states of affairs. This type of agency fills a gap since too little attention has been given to the ways in which technology shapes what happens in the world. Technological artifacts can powerfully affect social arrangements, relationships, institutions, and values. Treating artifacts as agents properly frames them as significant constituents of the human world.
Turning now to the VW emission fraud case, we can illustrate causal agency, for in this case an artifact, i.e., software, had an essential role in the wrongdoing. In 2015, the US Environmental Protection Agency (EPA) discovered that diesel engines sold by VW in the US contained a defeat device, that is, a device “that bypasses, defeats, or renders inoperative a required element of the vehicle’s emissions control system”, as defined by the Clean Air Act. According to US officials, the defeat device software was able, by means of sensors, to detect when the car was being tested. That is, the software included instructions that activated equipment to reduce emissions, by adjusting catalytic converters and valves, whenever the car was being tested. The same software turned such equipment off during regular driving, possibly to save fuel or to improve the car’s performance. As a result, the cars’ emissions exceeded legal limits, by as much as 40 times the threshold values.
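To make the causal role of the software concrete, the behavior described above can be sketched as a small decision rule. This is a hedged illustration only: the sensor inputs, the function names, and the test-detection rule are our assumptions for the sake of exposition, not details of the actual VW code.

```python
# Purely illustrative sketch of the kind of control logic described in the text.
# The sensor inputs, names, and detection rule are assumptions, not details
# of the actual VW software.

def emissions_test_detected(steering_angle_deg: float,
                            wheel_speed_kmh: float,
                            vehicle_speed_kmh: float) -> bool:
    """Guess that the car is on a test stand: the drive wheels turn while
    the steering wheel stays fixed and the chassis does not move."""
    return (wheel_speed_kmh > 0
            and vehicle_speed_kmh == 0
            and steering_angle_deg == 0)

def select_engine_mode(steering_angle_deg: float,
                       wheel_speed_kmh: float,
                       vehicle_speed_kmh: float) -> str:
    """Enable full emissions control only when a test is suspected."""
    if emissions_test_detected(steering_angle_deg, wheel_speed_kmh,
                               vehicle_speed_kmh):
        return "full_emissions_control"  # converters and valves engaged
    return "regular_driving"             # controls dialed back; emissions rise
```

The sketch shows why the device counts as causally efficacious in Schlosser’s broad sense: the branch taken by `select_engine_mode` determines whether the emissions equipment operates, and hence whether the car (seems to) pass the test.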
So, in this case we have an example of software that is causally efficacious; the defeat device controlled the VW engines so that they would (seem to) pass the EPA test. Hence, the device is an agent according to the definition of causal agency. The case is fitting here because it is analogous to Bostrom’s disaster scenario insofar as it involves software that harms humans.
Although the defeat device fits causal agency, the agency of the defeat device has not been emphasized in discussions of the case, and no one (to our knowledge) has suggested that the defeat device was responsible for the fraud, though it was an essential element in making the fraud possible.
The second type of agency involves the capacity for intentional action; that is, intentional agents are entities that act intentionally. In traditional accounts, only humans can have intentions. Since intentions are seen as mental states, artifacts do not, strictly speaking, have intentional agency. Computer scientists and others sometimes attribute intentions to artifacts, but such attributions are metaphorical; artifacts are spoken of as if they had intentions. Some suggest that at some pivotal moment in the future AI artifacts might come to have something comparable to human intentions, but this is highly speculative; for now, intentionality in artifacts is only metaphorical. Hence, if someone were to say that the VW defeat device was an agent in bringing about the fraud, this would either refer to causal agency or would have to be understood as a metaphorical attribution of intentional agency.
Causal agency and intentional agency share the element of causal efficacy, but in intentional agency the agent’s intentions begin the chain of causality. Importantly, intentions and intentional action are linked to responsibility, though the connection is complex. In ethical and legal contexts, the presence of particular types of intentions determines the ascription of responsibility.
Returning to the VW case, in the blame game that ensued after the fraud became public, top management and the engineers were targeted as the entities that were possibly responsible for the fraud. As explained above, causal agency is not sufficient to initiate a discourse on responsibility. For that, intentional agency is needed, which is why the focus in the VW case has been on humans, i.e., VW top management and engineers. Top management claims that the decision to use the defeat device was made by the engineers once they realized that the engines on which they were working would never meet the EPA standards without significant improvement (i.e., investments by the company). Allegedly, not wanting to be bearers of bad news to their higher-ups, the engineers handled the problem on their own, keeping the engines as they were but adding the defeat device (Smith and Parloff 2016). On this account, top management may not have had the intention to break the law, though the engineers did.
Nevertheless, in the public debate about the case, intentional actions of top management come clearly into play. Top management acknowledges that they specified both goals on which the engineers acted (to achieve a particular level of performance for the car and to meet the EPA standards). Intentionally setting these goals and intentionally creating a corporate culture in which engineers feared the consequences of failure (and did not want to tell top management that these goals could not be met) can be seen as setting off the sequence of events that led to the fraud. The point is that the issue of responsibility depends not just on causal sequences but on intentions and intentionality.
Causal agency and intentional agency each capture something important about how states of affairs in the world are produced. However, neither alone adequately captures the full story. For example, in the VW case, causal agency covers only the causal efficacy of the defeat device but not the intentionality of top management and the engineers. Intentional agency covers the intentionality and causality of the humans, but not the contribution of the artifact that the humans used to realize their intentions. The defeat device played a role here both in shaping the intentions of the engineers and in making the fraud possible. Although a simple combination of the two might seem to solve the problem, it would not capture how the two work together, that is, how the intentionality of human actors and the efficacy of artifacts interact with one another to produce results like the VW fraud.
We propose such a combination, which we refer to as triadic agency. We are not the first to develop a multi-component account of agency: the above-mentioned ANT is an example of an account involving a network of components. Our proposal is to use triadic agency to analyze events involving technological artifacts in a way that draws attention in particular to users, designers, and artifacts—the most powerful agents in this kind of event (Fig. 1).
Triadic agency is especially helpful in sorting out responsibility.
When humans act with artifacts to achieve goals:
The user (or users) wants to achieve a goal and delegates the task of achieving that goal to the designer.
The designer (or designers) creates an artifact in order to achieve the goal.
The artifact provides causal efficacy necessary to achieve the goal.
In the VW case, top management (representing the company) were the users; they had the goal of creating a car that would meet EPA standards while also meeting certain performance standards. The engineers were the designers—they were tasked with creating a car that would fulfill the goals of top management, and they did so by creating an artifact that would achieve the goal: the defeat device. All three together contributed to the achievement of the goal of passing the EPA test while also meeting performance standards, and all three were essential to producing the emission fraud. All three are part of the agency of the illegal action.
This triadic account of agency allows us to identify agency in producing states of affairs while at the same time acknowledging that it is neither humans nor artifacts alone that do this. Humans and artifacts work together, with humans contributing both intentionality and causal efficacy and artifacts supplying additional causal efficacy. When users delegate to designers, they do so with the intention to achieve their goal and when designers accept the task, they intend to complete it, that is, they intentionally create artifacts that will achieve the delegated goal.
To illustrate the value of adopting a triadic account of agency, we can use it to think through issues of responsibility, in particular responsibility with regard to increasingly autonomous artifacts. We will not limit ourselves to current technology but will show how triadic agency addresses issues of responsibility when it comes to fully autonomous, futuristic artifacts.