1 Agency, reconsidered

Human beings are agents. We can do things, things that we call actions. Part of this story is that we, in some sense, cause our actions. How exactly this plays out is of course rather complicated, but the general idea is easy enough to understand. However, is it possible to maintain an understanding of genuine human agency in the face of increasing technological encroachment on our lives? When faced with, for example, targeted adverts and artificially curated social media timelines, is it still possible for us to exert our agency?

Here I will therefore investigate what it would mean to take these environmental factors seriously. The worry about “hypernudges”, for example, is especially pressing in this regard [14, 17]. While traditional ‘nudges’ were understood as a way to guide or influence behaviour without changing economic incentives or forbidding any options, hypernudging is argued to be far more persuasive. The term was coined by Karen Yeung, who argues that the effectiveness of hypernudging stems from its pact with Big Data: algorithmically driven systems harness the informationally rich reservoir of online human behaviour to “guide” our actions and patterns of thought. Hypernudges are therefore “highly potent, providing the data subject with a highly personalised choice environment” [17], and in this way come to regulate, by design, the choice architectures that are available to the agent. All this is to say that, given the threat of hypernudges, it makes sense to take seriously whether, and in what way, our agency might be compromised. Significantly, this turns on what conception we have of agency in the first place.

For example, if we are simply reacting to our environment rather than genuinely exercising our agency, then this does seem like a problem, at least if one thinks that agency is a matter of exerting our will. However, if our agency is indeed of the reactive kind, then this threat is not to agency itself, but rather to certain effects of our agency.

In this paper I will work through an argument for reactive agency. While this argument reveals some interesting things about human agency, I find it ultimately unconvincing. The stress this way of characterizing agency places on context is welcome, and of course has implications for our relation to technological systems. Our technological context can frustrate our agency, and our ability to put our values into action. In this way we cannot divorce agency from the context in which it is exercised. The account of agency I will critically engage with here holds that practical reason is more about “picking up and continuing threads of histories that the world offers” than it is about actually doing anything [1]. In the end, however, I move away from such a reactive characterization of agency, while trying to keep those parts of it that I find useful. What is useful about this account is that it presents us with a potential solution to the ‘new’ problem of pervasive technologies. While more traditional accounts of agency might see the ubiquity of new technologies as a threat to agency, a reactive account bakes these environmental factors into agency itself. While I argue it goes too far, what I take from this account is that our environment offers us various possibilities for action, and that we ought to take this seriously in our thinking both about agency and about the impacts of technology. Moreover, there is something to learn from our tendency to ‘fall’ for various ‘temptations’ in our environment, and this justifies further reflection not only on the design of different technologies but also on whether such technologies ought to exist at all.

I will, first, introduce the idea of technological artefacts mediating reality, and thus moulding the kinds of actions that are possible for us. Second, building on this, I introduce the idea of an affordance, which helpfully explains how these “possibilities for action” can be understood. Third, I introduce a reactive and interactive account of agency which views this contextual intrusion not as a bug but as a feature of our agency. Fourth, I consider some objections to this account. Last, I reflect on what this way of seeing human agency means for our relation to technology.

2 Technology and mediation

The first thing we need to get a handle on is the way in which technological artefacts mediate reality. This is not the almost trivial point that technologies can be intermediaries in our relation to the world. Rather, it is a substantive and general claim about how technology shapes how the world appears to us and our possible interactions with it. For example, with the invention of the first microscope, the class of things we could know about the world greatly expanded. Suddenly we could see more: new parts of our world were rendered visible and potentially understandable to us. This knowledge, however, was made possible by the microscope itself: it made what was once too small to be visible, visible. It thus mediated our relation to the world, and in that process, changed the world. Of course, this does not mean that any facts of the matter actually changed, in the sense that something new came into existence that was not there before. Rather, our ability to get at those facts of the matter changed. Humans, without the use of technology, do not naturally arrive at the best possible explanations for how the world works. We make use of tools, and these tools can help us provide better explanations for how and why things occur. The deeper point, however, is that these tools also change the world and how it appears to us. We gain access to deeper, or at least different, parts of reality, and technological mediation is one way in which we can explain this process.

To use another example, which makes the point in a different way, we can look to a curious instance of rake design in the city of Cluj, Romania. In 1996, the mayor of the city wanted to shorten the rakes used by city employees [16]. Apparently, the existing rakes encouraged a most unwelcome kind of behaviour: employees could lean on them. According to the mayor, the way to combat this kind of laziness was to shorten the rakes so that this leaning would no longer be possible. In a sense, longer rakes encouraged laziness, while shorter rakes forbade the workers from leaning on them. By intentionally changing the design of the rakes, the mayor hoped to achieve a productivity goal. Notice, however, that before the mayor even made his intervention, the rakes, unintentionally, offered a use that went beyond their specifically designed function. The first thing to note about this example, therefore, is that technology can have unintended effects, and may provide the possibility for performing actions that were not anticipated by those who designed it. Second, it shows how different designs of technologies can present different possibilities for action. This is the topic of my next section.

3 Technology and affordances

In this section I want to introduce the concept of ‘affordance’ as a nuanced way of thinking through the ways that our technological environment can offer various possibilities for action. This account is particularly useful for my purposes as it is relational: it cashes out these possibilities for action as being closely related to the intentions and capabilities of the subject.

The literature on affordance initially comes from work done by James Gibson (1979: 127), who coined the term and defined it as:

What it offers the animal, what it provides or furnishes, either for good or ill. The verb to afford is found in the dictionary, but the noun affordance is not. I have made it up. I mean by it something that refers to both the environment and the animal in a way that no existing term does. It implies the complementarity of the animal and the environment.

Here, the concept was put to use in explaining how different environments might offer or suggest different kinds of actions to different kinds of organisms. The literature has since expanded significantly, with applications in human-computer interaction, science and technology studies, perceptual psychology, and cognitive psychology [4, 5, 6, 7, 15]. It is far beyond the scope of this paper to survey this literature, fascinating as it is. My purpose here is merely to introduce the concept as a useful segue into thinking about a reactive account of agency. Below I go into some more detail regarding the concept of affordance and how it relates to this overall project.

As noted above, the concept of an affordance was initially used in ecological psychology. Here, it was operationalized to show how different environments ‘offer’ various potentialities of action to a given organism. Gibson used the term to refer to perceived opportunities to engage with objects in the world (1979). The novelty of this account, at the time, was its emphasis on the fact that perception is not the passive interpretation of environmental information. Rather, perception is to be understood as active and directed: our activities are goal-directed, and we do not merely perceive the world, but also perceive the possibilities for action that it presents to us.

Affordance can be construed as a way of describing the possible actions an environment offers to an entity. For example, a desk may afford writing, reading, etc. for an adult human, while for an animal the same desk may afford shelter. Which type of affordance is more salient depends on the characteristics of the entities in question (human, animal, or machine). Given knowledge about the entities involved, we can make reasonable inferences as to what a given object might afford (such as knowing that animals are unlikely to use a desk for writing). In this way, the affordances an artefact embodies depend both on its physical makeup and on the characteristics of the subject perceiving it. Moreover, we can imagine that shared cultural history and social learning also come to play a role in the kinds of affordances that agents find most salient (Ramstead et al. [13]). For example, for chimpanzees, rocks may afford the cracking of nuts, but for lizards they may only afford basking in the sun.

The concept of an affordance, however, came to have explanatory purchase outside of Gibson’s academic home of ecological psychology. For example, Norman [10] emphasized the role of perception in the concept and brought it to bear on questions in design studies. He [9] characterized it as follows:

The term affordance refers to the perceived and actual properties of the thing, primarily those fundamental properties that determine just how the thing could possibly be used. A chair affords (“is for”) support and, therefore, affords sitting. A chair can also be carried [9].

Here we have a distinction between ‘real’ (actual) and ‘perceived’ affordances. Real affordances are the “functions attached to a given object”, and perceived affordances are those that are salient to users [4]. One finds many competing accounts of what affordances are, with some arguing that Gibson’s characterization gives artefacts too much efficacy and, conversely, that Norman’s perceptual focus means that artefacts only afford what users perceive them to afford ([4]: 242). Notwithstanding these difficulties, what has emerged is a shared acknowledgement that not all affordances are created equal, and that they therefore operate by degrees ([4]: 242).

This allows us to reflect on how affordances work, not just what they are [4]. Understanding the force of an affordance might mean reflecting on whether an artefact requests, demands, allows, encourages, discourages, or refuses being used in a particular way [4]. For example, a low wall around a property discourages trespassing, while a high wall with electric fencing refuses trespassing. To bring things closer to the main themes of this paper, we might also think that wearing self-tracking technologies encourages us to keep more physically active. They encourage us by fostering a particular kind of action or pattern of thought, such as monitoring our movements throughout the day. These devices, and what they afford, make some kinds of actions or thoughts more likely than others. For another example, consider speed bumps: these artefacts request that drivers slow down by threatening discomfort or damage should they fail to do so.

The idea behind introducing the concept of affordance was to illustrate how our environment comes to influence our ability to act in certain ways, and to provide a mechanism for understanding, explaining, and perhaps even predicting such influence. In the next section I will take this idea of our environment influencing action to the extreme: what if agency really is just a response or reaction to a certain state of affairs in the world? While I will eventually argue against this view, I think it illuminates some interesting features of human action.

4 Action: intentional and reactive

The belief-desire model of agency leaves much to be desired. On such standard theories we might say that actions are due to agents and that these actions can be explained by reference to intentions. However, the fact that actions are explainable by reference to intentions does not mean that we cannot have explanations that make no reference to those intentions. And so, we might have explanations (or reasons) for actions that are not simply intentional. This opens the door to reasons for actions not being intentional at all, which is precisely the kind of approach advanced by Rüdiger Bittner [1]. In Doing Things for Reasons, Bittner argues that “to do something for a reason is to do something that is a response to the state of affairs that is the reason” (2001: 161). Thus, Bittner suggests, the reason for an action might be certain states of affairs in the world, not in our heads.

One might worry, however, that such a reactive account is merely terminological: are we not simply replacing the ‘belief’ part of standard theories of action with ‘reaction’? In standard accounts, ‘belief’ amounts to things that an agent might be aware of, and this seems suspiciously similar to what is supposed in reactive accounts: namely, that an agent be aware of some state that is the reason for the action [1, 3]. Bittner has two responses to this challenge.

First, Bittner claims, on standard theories a belief is a component of the reason an agent does something, whereas on the reactive account this awareness of some state or event is only necessary for that state or event to count as a reason [1]. On standard theories, then, once we have an idea of what the agent believes, we are halfway along in our task of providing the reason for the action; on the reactive account, having such a candidate list of beliefs does not get us very far in determining what the agent was aware of. Second, according to Bittner, standard accounts require very specific beliefs: namely, those beliefs that show suitable or effective ways to fulfil some desire of the agent. On the reactive account, all the agent needs is some minimal awareness “of the state or event to which the action is supposed to be a response” [1]. Thus, this view is less demanding than other accounts of agency, and only requires that the agent be aware of some state of affairs (so that they are able to react).

However, it seems we often talk of agency in both reactive and intentional ways. For example, say Liana wants to get from her home to the gym, and suppose she believes that taking the car is the best way to get there. We can explain her action of driving to the gym as a combination of her wanting to be at the gym and her believing that driving is the best way to get there. The reactive account of agency, however, tells a different story: Liana’s action is a response to her environment, in that the reasons she has for going to the gym by car are a response to a certain state of affairs in the world (i.e., that the gym is too far to walk, or she does not want to walk, etc.). Both accounts, however, seem to include aspects of the other. In the first case, it is unclear how exactly intentional states are linked to action. For Liana, it is unclear how her thinking that taking the car is the best way to get to the gym somehow makes her a ‘desirer’ of driving to the gym. Moreover, her belief that taking the car is the best way to get there is just some representation of how she thinks things are, and does not by itself give her a reason to do anything [2]. In the second case, a fully ‘reactive’ account has difficulty accounting for false beliefs, in which case we really would like some idea of what an agent thought was going on, and not just information about states of affairs in the world. Towards the end of this paper, I will highlight some further problems with Bittner’s view.

Notwithstanding this disagreement, I believe that the significance this characterization of agency places on contextual features is worth preserving in some way. More specifically, it allows me to shed some light on the potential impacts that technology may have. Not only that, but this perspective signals a shift in how we come to think about technology, towards what might be called an ecological perspective on agency and technology. On this view, instead of asking what the “effects” of a technology may be (in some strict, causal sense), we investigate the conditions under which a technology becomes integrated into our lives, and the ways that this comes to influence the kind of society we live in [12]. For example, when reflecting on the ‘effects’ of self-driving cars, we might cite the reduction in traffic accidents and the emergence of responsibility gaps as potential consequences of the implementation of these vehicles. However, such a perspective can gloss over the fact that self-driving cars will not be implemented in a vacuum: their incorporation into our transit networks will require a rethinking of how we organize travel and how we plan our urban environments.

5 Acting, reacting, and agency

In what follows I will, first, provide a rough outline of this reactive account of agency. Second, I show how it accounts for ‘weakness of the will’. Third, I consider some problems with a fully reactive account of agency. Last, I offer what we can nonetheless take from this account.

6 Reactive agency

The best way, perhaps, to get a handle on what is characteristic of ‘reactive’ agency is to reflect on games. Bittner uses the example of chess to pump certain intuitions regarding agency (2001: 64). Imagine, for example, that we are playing chess and that you move your castle in such a way that it threatens my bishop. In response, I move my pawn so that it blocks your castle from reaching my bishop. It seems right to say that I moved my pawn because your castle was a threat to my bishop. We might then say, simply enough, that my moving my pawn was a response to your castle threatening my bishop. My action of moving my pawn, then, was a reaction.

The deeper point, however, is that what is true for games is also true for life. The reasons we have for what we do are reactions to states of affairs in the world, much like my reason for moving my pawn was a reaction to your moving your castle. We find countless examples in other games: you bowl a poor ball, and I hit it for a six. You make a defensive error, and I capitalize with a goal. Similarly, we find many examples in real life: you cut me off in traffic, and I hurl insults at you. You damage a book you borrowed from me, and I refuse to lend you books in the future. In all of these examples our actions are responses, and the way we figure out what the action is a response to is by consulting the specific history of the interaction [1]. This kind of strategy preserves the idea that it is almost impossible to describe what is going on in a situation by referring only to events in a narrow way. Usually, we require some kind of handle on the history of what we are describing: such histories allow us to make sense of which action is a response to which state of affairs [1]. As historians (in this colloquial sense) we are capable of making sense of the world, and of our actions, by telling stories about which action is a response to what. Simply put, then, the explanation of actions consists in historical explanation, and it is by being components of these histories that actions and reasons come together [1]. This sense of ‘historical’ is of course quite broad, and does not refer to the professional class that studies and tries to make sense of the past. Rather, the sense of ‘historical’ at play here has to do with the fact that when we try to explain the reason for an action we usually refer to something that is happening or that happened in the past [1].

What we have in place now is a rough understanding of what ‘reactive agency’ might look like, and how it might explain our reasons for acting. In the next section I would like to outline one of the key upsides to the reactive account developed thus far: a reactive theory of agency can helpfully explain away the problem of akrasia.

7 Weakness of the will

The problem of akrasia (or ‘weakness of the will’) is relatively simple to understand and quite difficult to deny: we (agents) often act against our own better judgement. More than this, we often know that some particular course of action is inferior to another, yet we decide to act in the inferior way in any case. This reactive account of agency, however, reveals that the trouble lies not with weakness but with domination [1].

One of the key features of this reactive account is that it displaces the popular notion of humans fully ‘willing’ our intentions into action. The common-sense idea of us as “kings of our souls” is replaced with an account that stresses that we are, for the most part, not fully in control of what we do [1]. The idea that there is some rational homunculus that somehow overcomes our emotions to make us “masters of ourselves” is replaced by a conception of rational agency that is far more holistic: “Rational agents are animals sniffing their way through the world. They are not in control. They are given to what they encounter” [1]. Given what I have said about situational influences, this account makes quite a bit of sense. We are the kind of agents who follow threads that the world furnishes for us, and agency is the ability to pick up on these threads and the capacity to respond to them. Here we can also see the relevance of the concept of affordance introduced earlier: the possibilities for action that the world presents to us turn out to be essential in our thinking about agency. We cannot divorce our conception of rational agency from contextual features of our environment.

The ‘problem’ of akrasia, therefore, is transformed into a feature, not a bug, of our particular kind of agency. Instead of ‘weakness of the will’ being something that we sometimes fall for, we rather see that this ‘falling for’ things is exactly the way that we also get things right. That is, practical reason consists precisely in ‘falling for’ or being ‘tempted’ by different courses of action [1]. Getting it right, or being ‘rational’, is just ‘falling for’ the ‘right’ things. For example, we know that, all things considered, eating chocolate is not good for one’s health. However, sometimes when we are in the line at the grocery store and have nothing better to do, we reach out and grab a bar of delicious chocolate. Here we might say we are falling to temptation. However, in the case where we resist the urge to purchase the chocolate (based on considerations of our health, etc.), we are simply falling to temptation in the other direction [1]. Of course, it is normal to think that one of these temptations is better than the other, but this does not undermine the central point, which is that it is in the nature of practical reason to fall for things. According to Bittner, then, the difference between a ‘good’ and a ‘bad’ temptation is a matter of what we pick up in the world.

On a reactive account, therefore, our ‘falling for’ various temptations is not a bug, but rather a feature of our agency: “self-mastery is not an ideal, but an illusion” [1]. Thus, the ‘problem’ of akrasia no longer remains (or perhaps takes on a different form). The implications of this for the thoughtful design of artefacts are rather significant: if our reasons for action are states of affairs in the world, it becomes rather important that these states of affairs are helpful and aid us in producing desirable behaviour. If the algorithms that govern social media platforms are driving us apart, or our urban planning is making it impossible for us to act on our values, these issues directly impact our agency. A reactive account of agency can capture this kind of phenomenon in a way that standard accounts might not be able to.

8 Problems with the reactive account

While the reactive account has certain upsides and illuminates features of human agency that I find useful (especially with respect to technology), it is also deeply flawed. Here I will raise two issues with this account. Following from this, I suggest what the main takeaways from these reflections on agency are.

9 The problem of false beliefs

The first problem is that of false beliefs. On the reactive account above, the reasons for an action are certain states of affairs in the world. The issue, then, is to explain cases where agents have false beliefs about the world. Say, for example, that I believe it is raining outside and therefore take my umbrella with me on my walk. However, it turns out it is not raining at all. Now, on the reactive account, we are at a loss to explain what my reasons are for taking the umbrella with me. If our reasons for action are states of affairs in the world, but we are wrong about those states of affairs, then it seems I have no reason for taking my umbrella. But this seems wrong: perhaps I only checked the weather report but failed to look outside, or I was looking at the forecast for the next day, etc. There may be a number of possible explanations for my action, but it might just be that none of them are only about states of affairs in the world.

Moreover, when we reflect on new types of generative AI, the problem of false beliefs becomes even more pressing. With the ability of Large Language Models (LLMs), for example, to engage in ‘conversational’ style behaviour with users, there is a risk that users start inappropriately trusting these systems. Users might therefore develop false beliefs about the intelligence of LLMs (or even mistakenly attribute capacities such as consciousness to them). LLMs therefore have the potential to generate contexts in which users are incapable of utilizing their best judgement, especially because these systems are intentionally designed to appear ‘human-like’. It remains an open question how a fully reactive account of agency would be able to cope with these kinds of technologies, where we seem to have a genuine threat to human agency.

10 We do not learn much about the act of an agent if agency is only reactive

The second concern is that we do not actually learn much about an act an agent performs if we have only the conceptual resources of the reactive account. If the reasons for actions are all ‘out there’, then not much is revealed about the agent herself. This strikes me as a problem. While the reactive account is surely useful in its articulation of how our social environment comes to determine action, it errs too far in this regard. For example, if all we have at our disposal when delineating a good from a bad action is reference to temptations, then we lose sight of who is falling for these temptations, and it becomes difficult to explain why they fall this way or that without talking about who they are or, more specifically, what reasons they had for acting in a particular way. If we really want to maintain the idea that human beings are agents, then there needs to be something about those agents in the explanation of action.

11 What’s good about being reactive?

So, what is good about the reactive account outlined above? Well, as I mentioned initially, such an account stresses the importance of contextual factors in our understanding of agency. This is significant insofar as it prompts serious reflection on the kinds of technology that we produce and embed in our societies, as these come to shape the kinds of actions that we might readily perform. While there is certainly a worry about the increasing encroachment of technology into our decision-making, these are not new worries, nor are they concerns that we lack the ability to deal with. However, these new technologies do offer something worth taking seriously: they shine a different light on old philosophical questions, providing us with the opportunity to test our philosophical concepts in novel environments. In a way, then, they offer us an opportunity to ‘react’, just not in the way that the reactive account of agency above suggested.

For example, consider the case of Google Glass, smart glasses fitted with a camera and hooked up to the cloud. The promise of the technology, at the time of its introduction back in 2013, was to provide a virtual overlay on physical reality, creating a ‘mixed reality’: a blend of the physical and the virtual. Glass created quite the sensation, not only for the mixed-reality future it promised but also, perhaps more significantly, for the challenges it presented to the value of privacy. One of the key issues was that it was unclear whether a user of Glass was using the device to record or take photos, and so people became suspicious of Glass users, especially in sensitive contexts, such as on dates or in gym locker rooms. This led some businesses to adopt ‘Glass-free zones’ in order to protect their clients [8]. More reflectively, it also prompted people to think through new understandings of privacy. According to van de Poel and Kudina, the emergence of technologies such as Glass is a challenge to a “control-of-information” conception of privacy (2022: 14). Different perspectives on the value of privacy might, for example, stress its relational or social dimensions, such as “identity building, civil inattention, sharing one’s experiences online, blurring the line between remembering and forgetting” (van de Poel and Kudina [11]). Thus, what we have here is an inquiry into the value of privacy. The authors refer to this as a case of “value dynamism”: instances where a certain value is reinterpreted (van de Poel and Kudina [11]). In this case, we might call it technological value dynamism, as it seems some technology is the cause of the re-evaluation. This is relevant for my purposes because it shows how new technologies, such as Glass, can challenge foundational values, such as privacy. It is in this way that new technologies can be generative: they can prompt novel evaluations of ‘old’ concepts or values.

You might worry, however, that I have cherry-picked examples that neatly fit my conclusion. So let us try one that on the surface suggests that my approach is deficient in an important way. What I have been suggesting thus far, in sum, is that what matters for agency is less about agency itself and more about the context in which agency is expressed. On the standard view of agency, technology altering our choice architecture can be seen as encroaching on individual agency. The reactive account, however, merely sees this as a change in the agent’s context.

So, what happens when an agent’s context itself has been manipulated? This is not so difficult to imagine: just think of social media platforms altering users’ voting or purchasing behaviour through targeted advertising. The companies that operate these platforms (such as Meta) use their ability to curate how their websites appear to different users in order to generate the maximum amount of profit. They therefore benefit financially from creating a particular algorithmically driven context. The issue is that the context the reactive account focuses on is now itself not a given: it has been manipulated in a particular way.

To deal with this issue we need a balanced perspective when reflecting on agency: we cannot simply treat contexts as statically generated and presented to users. Rather, we have to understand that contexts are constantly changing, and that the way(s) in which they change are not neutral. Careful attention has to be paid to who produces different contexts (such as algorithmically driven news feeds or targeted adverts), and for what purpose. Using the framework of reactive agency highlights just how important it is to pay attention to these shifting contexts.

12 Conclusion

Given how our environment influences the ways that we might act, we might be better off adopting a broader conception of agency. One such broad notion is to conceive of agency as reactive: as a response to states of affairs in the world. On this account, agency is not about intentions in the minds of agents, but rather about the ways in which agents respond to what the world affords them. Our agency is moulded by the environment we find ourselves in and can therefore not be divorced from contextual factors. However, this account is not perfect, and so I raised two objections against it. Notwithstanding this criticism, the point of delving into such a reactive account in the first place was to show how our agency is not necessarily all in our heads and is importantly mediated by contextual features. Moreover, whether we judge different technologies to be a threat or an opportunity for human agency is context dependent: we need to evaluate the salient possibilities that a particular technology might bring about, and then determine what its effects might be on human agency. This has serious implications for the design and implementation of technology, as technological systems can come to shape not only which parts of reality we have access to (as with the microscope) but also the kinds of things we can respond to; they therefore have a direct bearing on the potential actions we can perform.