Power in the digital age is about making things easy

BILL GATES

1 Introduction

Personal autonomy is becoming a central concept when discussing new technologies. Talk of manipulation (Jongepier & Klenk, 2022; Susser et al., 2019), addiction (Allcott et al., 2022; Joseph & Hamilton-Ekeke, 2016) or different effects on cognition itself, mainly on attention and memory (Small et al., 2020; Vedechkina & Borgonovi, 2021), is becoming commonplace (see also Carr, 2011). These worries make special sense if we look at the socio-economic context in which digital technologies are developed, usually conceptualised as “surveillance capitalism” (Zuboff, 2019b) or “platform capitalism” (Srnicek, 2016).

Platforms constitute a novel kind of business centred on obtaining data from their users, and they have become the main habitat of our digital lives. They mediate between agents and provide the ground for their activities, thus gaining “privileged access to record them” (Srnicek, 2016, p. 44) and to structure them through a “designed core architecture that governs the interaction possibilities” (Srnicek, 2016, p. 48). This means that they are not only analysing the data produced in interactions but also, and crucially, using data to create and structure said interactions. For example, corporations can now engage in Continuous Experimentation and A/B testing in web and platform design, performing controlled experiments to make “evidence-based decisions to guide their software evolution” (Ros & Runeson, 2018, p. 35). As Shoshana Zuboff states, an important aspect of platform capitalism is “behavioural modification”: the efforts by corporations to produce “behavior that reliably, definitively, and certainly leads to predicted commercial results for surveillance customers” (Zuboff, 2019a, p. 18). After all, “the surest way to predict behaviour is to intervene at its source and shape it” (Zuboff, 2019b, p. 202), and this can be done directly through design. Because of their technological possibilities, digital platforms are unprecedentedly suited to dynamic hyper-designability, meaning that they can be designed down to the last detail –to the pixel!– and that their design can be continuously changed based on its capacity to steer the behaviour of their users. This is what is at the core of our concerns (and others’, see Agudo & Liberal, 2022; Frischmann & Selinger, 2018; Jongepier & Klenk, 2022; Sahebi & Formosa, 2022; Susser et al., 2019) around personal autonomy.
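To make this mechanism concrete, the following sketch (our own illustration, not any platform’s actual code; the experiment name, variant labels and values are hypothetical) shows the basic shape of such a controlled experiment: users are deterministically assigned to one of two interface variants and their subsequent behaviour is logged, so that the variant that best produces the desired behaviour can be kept in the next design iteration.

```typescript
// Minimal sketch of A/B testing on an interface (hypothetical names and values).
// Users are stably bucketed into one of two design variants; logged outcomes
// feed the next iteration of the design.

type Variant = "control" | "autoplay-next-video";

// Deterministic assignment: hashing the user id keeps the variant stable across sessions.
function assignVariant(userId: string, experimentId: string): Variant {
  let hash = 0;
  for (const ch of `${experimentId}:${userId}`) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0; // simple rolling hash, unsigned 32-bit
  }
  return hash % 2 === 0 ? "control" : "autoplay-next-video";
}

// Record how long a user stayed engaged under the variant they were served.
function logEngagement(userId: string, experimentId: string, secondsOnApp: number): void {
  console.log(
    JSON.stringify({ experimentId, userId, variant: assignVariant(userId, experimentId), secondsOnApp })
  );
}

logEngagement("user-42", "feed-redesign-07", 317);
```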

Given this context, we consider it crucial to pay attention to the role that technology and design play in personal autonomy. It has long been argued that technics (not to mention technologies) are constitutive of the human species (Ihde & Malafouris, 2019; Leroi-Gourhan, 1993), its culture and cognition (Clark, 2010; Malafouris, 2016), or even human consciousness (Ong & Hartley, 2012) and our existential structure (Stiegler, 1998). Moreover, any kind of technology, as Latour (2002) argues, is moral precisely because it constitutes us, as the human beings we are, by shaping our behaviour. Technologies “amplify specific aspects of reality while reducing other aspects” (Verbeek, 2006, p. 365) and in their use “specific actions are invited while others are inhibited” (p. 367). And yet the “hyper-designable” character of (the interfaces through which we use) digital technologies takes this basic idea to a new level. While most kinds of technologies are used through sensorimotor interactions with them, digital technologies now offer a much more flexible and dynamic canvas from which to shape those interactions. Digital “behavioural modification” is done through many different mechanisms, but in many cases it all starts with an effective design of the interface (forms, colours, patterns, size, interaction mechanics, time delays, etc.).

In daily life, this effective behavioural modification seems to lead to the familiar feeling of frustration when we find ourselves using digital technologies in ways that we would prefer not to. For example, one could want to spend the coffee break at work video-calling an old friend to share some exciting news, but somehow end up scrolling through TikTok for the whole break. This could count as a clear example of the “Frankfurt-type” (Frankfurt, 1971) cases that have been classically discussed in the literature on personal autonomy, which tend to be analysed as conflicts between higher and lower-order desires (if I do not desire to desire something that I end up doing, I cannot be said to be exercising my free will). However, if we only focused on an analysis at this level, speaking of conflicting desires in an abstract (and propositional) way, we would be missing a rather fundamental part of the problem. Certain analyses have started to highlight the relational and habitual character of these felt conflicts within digital platforms (see Aagaard, 2015; Marin, 2022). Yet a richer and more complete analysis, focused on how this loss of personal autonomy emerges in sensorimotor interaction (with technology), is still to be made.

Enactive-sensorimotor theories of cognition (mainly as developed in Barandiaran, 2008, 2017; Di Paolo et al., 2017; but see also Noë, 2004; O’Regan & Noë, 2001; Thompson, 2005) can provide the tools to ground and naturalise the concept of autonomy in a fundamentally situated way, allowing for detailed, operational and normative grounds for the analysis of the impact of technologies on human beings. Philosophy of mind has traditionally worked under rationalistic/computationalist assumptions (Carruthers, 2006; Fodor, 1980; Newell & Simon, 1976; Putnam, 1965). Enactivism, working instead under a more interactive conception of cognition that arises from our sensorimotor engagement with the world, provides a more suitable framework to face the challenges posed by contemporary technology (particularly since its displacement from exclusively text-based interfaces to screen-mouse, screen-tactile and more recently, augmented and virtual sensorimotor interfaces). What is central to this approach is to understand the mind, and ultimately personal identity and autonomy, as emerging within this sensorimotor domain, deeply constituted by agent-environment sensorimotor interactions, perception–action cycles involving brain, body and environmental structures (including, of course, technology and other agents). What grounds cognition, and in turn personal autonomy, is not primarily the capacity to represent the environment in internal models but the capacity to engage with it in a meaningful manner.

In this paper, we aim to lay out a meeting ground between discussions around personal autonomy, technology and design. That meeting ground is the enactive notion of (networks of) habits and sensorimotor agency, where we can ground both an account of technical behaviour and of the foundations of personal autonomy. The structure of this paper will be as follows: in Section 2, we will first briefly review classical literature on personal autonomy to get a rich picture of what is at stake when we talk about personal autonomy. Then, in Section 3, we will offer an enactive account of the foundations of personal autonomy in sensorimotor autonomous agency. Finally, in Section 4 we provide an account of how technology fits within this enactive picture, and how (digital) technologies can be said to, in this sense, diminish or enhance autonomy from their very design.

2 Personal Autonomy: A Three-Dimensional Picture

Famously developed by Kant (2009/1785) as the basis of human dignity, the notion of personal autonomy had its unquestionable heyday in the Enlightenment, in the form of autonomy-as-independence (from others), bearing rationalistic and individualistic ideals (Friedman, 2000). As opposed to this idea, however, feminist philosophies questioned the notion of autonomy, seeking to reconcile its possibility with the fact that we are fundamentally entangled with others and with the world throughout all of our lives (see MacKenzie and Stoljar, 2000b for a compilation). These challenges, to which we add those posed specifically by the technological dimension, ask us to continue discussing the concept.

Broadly speaking, we consider personal autonomy to be the gradual ability and possibility to be in control of our behaviour and of acting in ways that can be said to be our own. Among the many levels into which this intuition can be unfolded, there is at least an obvious double scale of autonomy in place. What Marina Oshana refers to as “local autonomy” has to do with a “transient characteristic” (Oshana, 1998, p. 92) of actions, focusing on the extent to which an agent is “in control” of a particular action. This relates to familiar notions such as agency or the sense of agency (Di Paolo et al., 2017, Chapter 7; Gallagher, 2012; Pacherie, 2007). On the other hand, “global autonomy” is for Oshana a condition that refers to the whole life of a person and not just to particular acts, thus “zooming out” to encompass a broader scale. This can be considered closer to the concept of “authenticity” (see Varga & Guignon, 2020): the particular ways in which agents behave throughout their lives (that are nevertheless obviously subject to change). This distinction between local and global autonomyFootnote 1 also echoes Enoch’s (2022) distinction between autonomy as non-alienation -which he also finds suitable to call “authenticity” (p. 145)-, related to our life being shaped by our values, and autonomy as “sovereignty”, related to the possibility of having an (effective) choice in our behaviour. We consider that both scopes can be gradually defined, and although they can be conceptually distinguished,Footnote 2 they are fundamentally interrelated. The possibility to control specific actions allows us to construct an authentic self, which will in turn change how we control our actions.

To gain a wider grasp of what is at stake when talking about personal autonomy, we propose a three-dimensional analysis of different approaches developed within moral philosophy over the last decades.

  1. We first distinguish a structural dimensionFootnote 3 of autonomy concerned with the synchronous integration of mentalFootnote 4 states to qualify an action as autonomous or free. Frankfurt’s (1971) account of second-order desires is its main representative. He postulated that proper “freedom of the will” (or autonomy) occurs not when we merely do what we desire, but when we do what we desire to desire. It is not my desire to play Candy Crush here and now that defines my autonomy in doing so. Rather, it is my desire not to want to waste my time with Candy Crush that determines that I am not fully autonomous when compulsively playing it (or that I am autonomous if my desire is to play it because I want to explore the full game to become a game designer). In this sense, we need to attend to the nested hierarchical structure of desires to judge the autonomy of an act. Other authors oppose the hierarchical aspect of these accounts, mainly because of the danger of falling into an infinite regress (Friedman, 1986; Thalberg, 1978). For instance, Marilyn Friedman (1986) offers an account where the important structural aspect is the integration of higher-order and lower-order mental states, all of them contributing to the ascription of autonomy. Structural approaches, in sum, aim to capture the extent to which the agent can be said to “identify” with a particular act or to “endorse” it, attending to the structure of her psychological states at the moment of acting.

  2. We can also distinguish a temporal dimension in the literature, developed in part to answer some of the problems that synchronic approaches to autonomy encountered. In a “forward” temporal direction, Bratman (2000) aimed to shed light on the notion of “identification” by remarking that we “identify” with a desire/act when it is integrated with our long-term plans. His defence rests on the claim that human agency is not static, but inherently temporal; we are agents fundamentally able to project and plan forward in time, and to set goals for ourselves. A normative character then arises that goes beyond the here and now of a synchronic approach to desires. In the “backwards” direction, in turn, Christman (1991) offered historical-counterfactual criteria to identify autonomous desires: we can say that a desire is autonomous if we did not resent its acquisition (or if we would not resent it). This taps into a rather important aspect of personal autonomy for our purposes: the developmental aspect.

  3. The relational dimension, brought forward by feminist philosophers, is concerned with the role of the (mainly social, although see Anderson, 2022 for a review of autonomy and scaffolding) context in which human autonomy is developed and exercised. It has been explored in different ways, both as a necessary but merely causal influence on autonomy, and as a fully constitutive one (Stoljar, 2022). What relational views have in common, nevertheless, is rejecting the assumption that autonomy is achieved when we can ignore or overcome –conceptually or practically– the particular ways in which agents are embedded in a relational context. In this sense, for Oshana (1998), part of what defines an agent as autonomous is the extent to which her relational context offers a sufficient range of relevant and feasible courses of action from which she can choose. On this view, the environment of an agent cannot be excluded from an evaluation of her autonomy, nor would it even make sense to speak of personal autonomy as something prior to this environment. If the relational context reduces the possible courses of action, this reduction amounts to a diminishment of our personal autonomy, even if our mental states were to be synchronically and temporally integrated according to the two other dimensions.

Furthermore, the (social) environment also plays a role in providing a suitable context for the development of inherently interpersonal characteristics that are minimally needed for autonomy (Benson, 2000). As Wolf (1988) proposes, a “sanity” condition is needed to ascribe autonomy to the agent, understood as “the minimally sufficient ability to cognitively and normatively cognize and appreciate the world for what it is”. In a similar vein, many feminist philosophers have stressed the need for certain self-regarding attitudes to develop the capacity of autonomy, such as self-trust (McLeod, 2002). These substantial conditions hint at an interrelatedness of the three dimensions: the kind of embeddedness in a certain relational context [relational dimension] determines the extent to which an agent is able to develop a somewhat integrated subjectivity [temporal dimension] that allows us to consider the agent as “endorsing” her actions [structural dimension].

Although we have done a separate analysis of the three dimensions, almost contrasting them, what we wish to highlight is that autonomy encompasses structural, temporal and relational aspects that can help to illuminate the relationship between the two scales of autonomy previously described (local and global). My ability to control a specific action is part of a broader developmental process (that depends on these local actions) shaped by a specific relational context. Given this interplay of dimensions and scales, we would profit from an operational definition that, from the very start, can accommodate and naturalise them. Enactivism, and in particular the notion of habit, can help us with this task.

3 Cognitive and Sensorimotor Foundations of Personal Autonomy

From Kant to Dworkin, the philosophy of personal autonomy has been greatly influenced by rationalist assumptions and, more recently, by computational and representational functionalist framings of the mind in terms of internal propositional states (beliefs, desires, reasons, etc.). In a manner that is often opposed to (and at times complementary with) such explanations, the so-called 4E (Embodied, Embedded, Extended and Enactive) approaches (Calvo & Gomila, 2008; Di Paolo et al., 2017; Shapiro, 2011; Varela et al., 1991) have laid the foundations to ground and naturalise new conceptions of the mind and personal autonomy. Particularly, a reappraisal of the concept of habit (Barandiaran & Di Paolo, 2014; Carlisle, 2014; Caruana & Testa, 2020) has recently made it possible to re-conceptualise autonomy from the sensorimotor (Barandiaran, 2008, 2017; Di Paolo et al., 2017) to higher cognitive and social domains, including the moral notion of personal autonomy (Di Paolo et al., 2018; Maiese, 2022). However, the extension of enactive theory to digital environments and its connection with moral philosophy’s approach to personal autonomy remains underexplored. This section will be devoted to laying out the main aspects of an enactive theory of sensorimotor agency, starting from an explanation of the notion of habit and exploring how it encapsulates the three dimensions of personal autonomy to render it applicable to technological embeddedness.

3.1 Habits: Three-Dimensional Building Blocks for Personal Autonomy

Habits have been an object of philosophical interest throughout history, from reflex-like automatisms in associationist and behaviourist traditions to more dynamic self-organising structures in organicist traditions (Barandiaran & Di Paolo, 2014). Drawing inspiration from the latter, habits are understood in enactive theorising as self-sustaining and precarious sensorimotor schemes that structure our mental life (Barandiaran, 2008; Barandiaran & Di Paolo, 2014; Di Paolo et al., 2017; Egbert & Barandiaran, 2014). Sensorimotor schemes, a term borrowed from Piaget, are defined as patterns of sensorimotor coordinations (loops or perception–action cycles involving brain, body and environment) that are arranged in particular ways and end up being stabilised by the frequency of their enactment and their effects (Di Paolo et al., 2017, p. 57).

Importantly, habits are precarious: they depend on the continuous enactment of coordination patterns (and their effects) to exist and persist as somewhat stable entities. In the absence of this enactment, they would die out. This brings forth an intrinsic normativity that can be cast in terms of viability conditions (Egbert & Barandiaran, 2014). These can be defined as what a habit requires to maintain (enact, recur, reinforce) itself, and depend on the different “support structures” that sustain it and that are situated along the brain-body-environment continuum (including neuronal and musculoskeletal features and coordinations, as well as environmental features of various kinds). This implies that environmental (support) structures can directly constrain and shape habits.
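As a toy numerical illustration of this precariousness (our own sketch, not the formal model developed in Egbert & Barandiaran, 2014; the parameters and threshold are arbitrary), one can picture a habit’s degree of stabilisation as a quantity that is reinforced by each enactment and decays otherwise, dying out once it falls below a viability threshold:

```typescript
// Toy illustration (not the cited formal model): a habit's "strength" grows with
// enactment and decays without it; below a viability threshold it dies out.

interface Habit {
  name: string;
  strength: number; // crude stand-in for how stabilised the scheme is
}

const DECAY = 0.8;         // precariousness: strength fades when not enacted
const REINFORCEMENT = 0.2; // each enactment stabilises the scheme a little
const VIABILITY = 0.05;    // below this, the habit no longer self-sustains

function step(habit: Habit, enacted: boolean): Habit {
  return { ...habit, strength: habit.strength * DECAY + (enacted ? REINFORCEMENT : 0) };
}

let habit: Habit = { name: "open-links-in-new-tabs", strength: 0.5 };
for (let day = 0; day < 30; day++) {
  habit = step(habit, day < 10); // enacted for ten days, then abandoned
  if (habit.strength < VIABILITY) {
    console.log(`Day ${day}: the habit "${habit.name}" has died out`);
    break;
  }
}
```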

For example, the sensorimotor scheme (or habit) of reading a text and clicking and opening links in new tabs is constituted by different patterns of coordination, each sustained by a diverse set of sensorimotor and environmental correlations defining the viability of the whole habit. Support structures of this habit include: the various movements of your finger muscles to physically drag around the mouse, the proprioceptive and auditory feedback when pushing one of its buttons, the appearance of the new tab on the browser’s tab bar, the relief experienced when offloading the need to keep in memory the pages that you want to read later, and ultimately the reinforcement of the experience of moving to the new tab (or closing everything down because time is up).

Albeit in a very preliminary form, we can already see the three aforementioned dimensions of personal autonomy at play in the concept of habit so defined: 1. they are individuated as integrated structures of coordination patterns [structural], 2. through their continuous enactment [temporal], 3. in the environment that co-defines them [relational]. Taking the habit as a starting point, then, inevitably highlights all three dimensions of personal autonomy in a naturalised (operational) manner.Footnote 5

3.2 Sensorimotor Agency

Habitual sensorimotor schemes do not appear in isolation but develop in cohesive networks. As with coordination patterns, when habits are frequently enacted together they stabilise into networks of sensorimotor schemes that are also precarious and self-sustaining. We can now talk of activities, which entail a normativity of their own, nested with but still distinct from the normativity of each sensorimotor scheme. For example, scrolling down is a sensorimotor scheme that, together with other habits such as clicking on a news site from your “favourites” list, skipping the sports section, moving towards the section of your interest on the top menu, opening the interesting news in new tabs while scrolling down, etc., can constitute the activity of reading the news. Moreover, at this level, we can see how some habits are easier to enact together with others, thus becoming more easily clustered in particular activities. For instance, the nested set of habits we have just described can be enacted using the mouse with the right hand while holding a cup with the left and tilting the chair back with each sip of coffee (thus forming a “style” of reading the news every morning).

Sensorimotor schemes, then, relate to each other in complex ways, in relations of mutual support, sequencing, inhibition, consistency, redundancy, etc. We can talk about novel normative dimensions at the activity level, such as the “efficiency”, “robustness”, “coherence” or “elegance” of the network, among other considerations that now become meaningful (Di Paolo et al., 2017, p. 156). It already becomes easier to identify the particular phenomenological feel that these normative considerations have for the agent. For example, among the many different possible networks that can become stabilised for keyboard and mouse use, there are particular configurations that feel more efficacious or elegant for each of us. Another related phenomenological aspect that arises at this level is the feeling of “flow” (see Csikszentmihalyi, 2013) or immersion in performing an activity (more on this in Section 4.2).

These clusters of habits form even bigger and more complex networks, webs of networks that define a novel form of agency –different from biological agency–: sensorimotor agency (Barandiaran, 2004, 2008) or sensorimotor life (Di Paolo et al., 2017). At this level, we find a minimal form of personal identity that is individuated by its own activity in complex and refined ways. We encounter an agent that acts by itself by asymmetrically regulating its interactions with its environment to preserve its sensorimotor identity as a web of networks of habits (Barandiaran, 2008). There is already a self, a locus of identity, that becomes the source of its own actions, in a manner that is sensitive to and endowed with intrinsic norms or meaning.

It is at the level of this new form of life, and not at the level of single habits or networks of habits, that personal autonomy emerges as a meaningful concept. Therefore, to say that personal autonomy is grounded in habits, or to analyse the role of habits in personal autonomy, is not to say that personal autonomy exists “as such” at that level. On the contrary, as we have seen, even at a local scale personal autonomy is related to agency, to controlling action. And this “controlling” of actions is done according to a normativity derived from the identity of the whole web (the sensorimotor agent), not of the single habit. Reading the news on the computer every morning occupies a different place in my psychological identity than in that of a professional journalist, for whom that bundle of habits is much more central to the specific ways in which she relates to the world and makes sense of her life and of who she is as an agent, and who will thus act to maintain it. This also allows us, as Ramírez-Vizcaya and Froese (2019) do, to talk about “bad” habits. A “bad” habit would be one that can “take over” the topology of habits that constitute the agent and which “jeopardizes or severely restrains the expression of some of the person’s regional identities that are relevant for her overall well-being” (Ramírez-Vizcaya & Froese, 2019, p. 8).

Importantly, most habits are social at multiple and nested scales in humans.Footnote 6 The tools I use to navigate the web (from the keyboard to the browser) are the result of social production and knowledge: using those tools, as well as reading and understanding the news, is a socially enculturated skill; news agencies are social institutions; we read and interpret the news as members of a society (I read the news knowing that others also do so, virtually bringing others to my interpretation, fearing that I might miss some important news that will become tomorrow's main conversation topic at work, etc.).

My identity is thus socio-technically constituted, and the specific ways in which different networks of habits (and their dynamic evolution) articulate it entail a normativity rooted in their precariousness. Being a sensorimotor identity is enacting a sensorimotor identity, and a complete breakdown that makes me cease to enact the web of sensorimotor schemes that constitute me as an agent would imply progressively losing my identity. Personal identity is, in this sense, performative.

3.3 From Sensorimotor to Personal Autonomy

There is a continuity between a raw sensorimotor autonomy or identity and full personal (psychological and ethical) autonomy that humans develop in complex socio-technical environments. We want to remark that sensorimotor interactions (including linguistic and technical) are essential all the way up to full personal autonomy in subtle ways that remain invisible to the rationalistic and individualistic approaches to autonomy that start from an already complex and abstract notion of self-control (see Maiese, 2022 for a similar critique). The rational and socio-technically individualised self is itself the result of complex interactive relationships –ranging from the incorporation of nested regulatory mechanisms of social coordination (see Di Paolo et al., 2018) to the recurrent structuring effects of interaction dispositives and practices on the shaping and encapsulation of the self (see Foucault, 1988).

We can understand (the basis of) personal autonomy in the sensorimotor domain as the continuous ability to regulate our coupling with the environment (and ourselves and others) according to the norms that emerge from the sedimented effect of our previous regulated interactions. This includes both the local scale related to control and, as a recursive or performative result of it, a more global scale of authenticity. The notion of networks of habits and their normative considerations guarantees this continuity from local to global scales. The sensorimotor agent, defined as an adaptive web of sensorimotor schemes, has a normativity of its own and is individuated through the continuous enactment of the (networks of) habits that integrate it (Barandiaran, 2008; Barandiaran & Moreno, 2006; Di Paolo et al., 2017). The persistence of the agent’s identity, thus understood, requires that adaptive asymmetrical regulations take place: the agent navigates its world so as to avoid risks to its precarious network of habits and to favour the interactions and environment that strengthen and enrich it. As we can see, then, there is no need here for specific forms of rational reflexivity to account for the emergence of a complex enough basis for personal autonomy. Again, this is not to deny that reflexive self-control (rational or otherwise linguistic) will also emerge in human beings, constituting a fundamental dimension of human personal autonomy. Rather, it means that personal autonomy is not something that bootstraps itself solely out of rational reflexivity, but something that emerges from and is continuously dependent on the sensorimotor domain. This will allow us to see how influences, relations or constraints operating at the sensorimotor level can enter into the personal autonomy discussion.

We can now further develop an enactive account that echoes the distinction found in Oshana and Dworkin between local and global autonomy. Local autonomy would be described as an asymmetry (in favour of the agent) in the locus of regulation of the agent-environment coupling (see Barandiaran, 2008; Barandiaran et al., 2009; Di Paolo et al., 2017 for further explorations of the notion of asymmetrical interaction as a requirement for agency). In turn, a more global notion of sensorimotor autonomy would be defined as the long-term developmental sedimentation and coherent integration of recurrent agentive regulations. It is important to say that both are necessary –but not sufficient– conditions in our account of autonomy; not as static and ever-present conditions, but as conceptually necessary and temporally extended ones. As noted by Dworkin (1988), we do not need to always have local autonomy to be considered globally autonomous; but we will need to have been locally autonomous at least at some point to end up being globally autonomous.

Table of definitions

Personal autonomy

  • Ethical domain: the gradual ability and possibility to be in control of our behaviour and of acting in ways that can be said to be our own

  • Sensorimotor domain: the continuous ability to regulate our coupling with the environment (and ourselves and others) according to the norms that emerge from the sedimented effect of our previous regulated interactions

Local autonomy / Agency

  • Ethical domain: the extent to which an agent is “in control” of a particular action

  • Sensorimotor domain: an asymmetry (in favour of the agent) in the locus of regulation of the agent-environment coupling

Global autonomy / Authenticity

  • Ethical domain: the particular ways in which agents behave throughout their lives

  • Sensorimotor domain: the long-term developmental sedimentation and coherent integration of recurrent agentive regulations

Identity

  • Ethical domain: the cumulative, performative effect of our past and present behaviour (in all its complexity) and dispositions

  • Sensorimotor domain: the global organisation of webs of networks of habits that have got stabilised throughout a lifetime

4 Technologically Mediated Sensorimotor Schemes and Digital Interface Design

4.1 Technical Behaviour and Designed Support Structures

We have seen how the environment takes a constitutively important role in the enactment and stabilisation of sensorimotor schemes in the form of environmental “support structures”. However, for human beings the recursive mesh between behaviour and environment is particularly deep (Malafouris, 2019); human environments (and bodies!) are fundamentally the product of human technical behaviour. Human beings actively transform and organise the environmental support structures of their habits as a way to regulate their sensorimotor interactions. We behave technically when we actively transform and organise elements of our body and environments with the intended effect of constraining or regulating couplings with (or between) other aspects of the environmentFootnote 7: you train your body to be efficient at hunting, I structure the workshop to organise the workflow, we make fire to cook, they sharpen the stones to cut wood. Techniques are tied to the mastery (understood as goal-directed, or normatively sensitive, coordination of well-established bundles of habits) of the agents in their sensorimotor interactions both creating and using such transformations of the environment. On the other hand, technologies can be understood as sedimented effects of this technical behaviour at different scales and within a systemic context. Transformations can become embodied and individuated in artefacts distanced from the immediate regulatory capacity of their surrounding agents (whose bodies, dispositions and behaviours are often both the vehicles and the product of technological systems).

So, the environment that is constitutive of habits, and of sensorimotor schemes more generally, is not a uniform environment. We can differentiate between “natural” support structures and “technical” support structures of sensorimotor behaviour. The former would be those that support the stabilisation of a habit without having been actively transformed at some time to do so. Examples of habits involving such “natural” support structures would be those involved in collecting apples by picking them directly from the branches, or in shaking the tree so that unreachable apples from upper branches fall to the ground.Footnote 8 Another “natural” support structure would be the paths that “naturally” form in the grass when they are repeatedly walked on; although a product of sensorimotor behaviour, they are not a product of technical (regulatory) sensorimotor behaviour (unlike signs, fences or pavement). On the contrary, technical support structures would include any support structure that has been actively selected, organised or transformed to regulate sensorimotor interactions (at least originally) and continues to exist as a sedimented effect of that transformation. For example, taking a fallen branch of a tree, carving a curved shape into one end, and then using it to collect the apples from the upper branches. The sensorimotor scheme of apple collecting is supported by a structure that has been actively transformed to do so, and this is a way of regulating that sensorimotor scheme. The carved stick can then remain as an embodiment of this regulation even when the agent is no longer there.

The possibility of transformations “surviving” their original agents takes us to the obvious conclusion that, in many cases, the support structures of our interactions with the environment have not been transformed by us, but by other agents. These transformations could have taken place to regulate other agents’ couplings, or they could specifically be aimed at regulating my current coupling; in that case they are support structures designed for me by others, which inevitably involves a degree of extrinsic regulation of our interactions. This forces us to distinguish between:

  • Autonomy-diminishing technologies: When support structures impose rigid and non-regulable sensorimotor interactions on agents, and are powerful enough not to allow the agent to perform further regulations of the interaction, a certain degree of autonomy is lost for that agent. These kinds of technologies would, by taking the locus of regulation away from the agent, hinder the continuous enactment of her capacity for autonomy (local and, in turn, global).

  • Autonomy-enhancing technologies: The fact that a technology is designed by other agents does not automatically make it detrimental to the autonomy of the agent who uses it. On the contrary, many sedimented transformations of the environment enable, signal, facilitate or augment the capacity to regulate behaviour or to transform the environment further to do so. The agent can still be in control of her sensorimotor interactions involving these technologies, rearranging them or other aspects of the environment through them. These would be technologies that open up new ways of enacting the capacity for autonomy.

Having made this distinction, we can turn to explore how digital technologies in particular can (and are currently being designed to) diminish or enhance personal autonomy. Although a proper analysis of the relationship between (digital) technologies and personal autonomy could (and should) run deep into the social, cultural, or economic dimension, we shall here mostly focus on aspects of their interface that are rarely analysed in depth, but remain nevertheless constitutive of more intricate forms of domination or liberation.

4.2 Autonomy Diminishing Digital Interfaces

The intimate but pervasive character of digital technologies enables them to take a highly active role in the agent-environment coupling at the sensorimotor level. When we navigate through a social media platform, for example, all of our actions are made through a sensorimotor interface (screen, mouse and keyboard on a PC, or the visual and tactile screen of mobiles and tablets). The digital surfaces of such interfaces are dynamically and thoroughly hyper-designed to guide our behaviour, to steer our coupling with the environment. In fact, the design industry speaks not only of interface design but of behavioural design (through interfaces) or, in a deeper phenomenological sense, of user experience design (a term coined by Norman, 2013).

A first example of autonomy-diminishment through interface design can be seen in the case of so-called “dark patterns”, originally defined as “a user interface carefully crafted to trick users into doing things they might not otherwise do” (Brignull, 2013). In a review of different examples, Gray et al. (2018) propose a taxonomy of dark patterns, and while some of them function merely by manipulating text features (wording or phrasing), in most cases what the design pattern “manipulates” is of a direct sensorimotor nature. For example, repeatedly interrupting the user through pop-up messages or buttons (what the authors call a form of “nagging”), obstructing certain actions by making them more difficult -in terms of pure sensorimotor coordination-, or manipulating the interface to favour specific actions over others (Gray et al., 2018).Footnote 9

As Mathur et al. (2021) note, dark patterns seem to act upon the choice architecture of the user by modifying the decision space or the information flow. This modification is in many cases accomplished not by modifying, biasing, limiting or distorting the input or the processing of a personal-level rational choice but by directly and effectively targeting the sensorimotor possibilities of the user: directly manipulating how and which sensorimotor schemes are enacted. The strength and robustness of certain habits are facilitated/hindered from the very beginning by the (extrinsically regulated) effectiveness of their support structures.Footnote 10 Dark patterns thus become a very basic way in which the locus of control of each particular sensorimotor scheme seems to be displaced –to a higher degree than usual– from the agent to the designed environment, resulting in a diminishment of possibilities of regulation, and ultimately of the performative sedimentation of authentic ways of acting.

A clear and paradoxical example of a dark pattern and its impact on personal autonomy can be found in the time-alert messages of certain social video platforms. In these cases, you can decide to set up a message that alerts you of the time you have already spent on the app, having it recommend that you take a break when you reach a certain time limit. With this possibility, personal autonomy is in principle enhanced by the platform. However, the only action invited by the message screen (in the form of a single coloured button) is to close the message and continue on the app. In order to actually leave the app you have to exit it “manually”. Note that exiting the app might not involve the execution of a much more complicated sensorimotor scheme in itself, but it is in fact much harder because it involves a transition from one activity to another, something that is also not elicited or afforded by any readily enactable displayed path. As time goes by, chances are that the habit that will be stabilised will be that of ignoring the message by clicking on the close button and continuing on the app, ultimately reinforcing the habit of ignoring self-imposed limits to habitual behaviour. A clear case of loss of personal autonomy.
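The asymmetry just described can be sketched schematically (hypothetical code, not taken from any actual platform): the break reminder renders exactly one salient, readily enactable affordance, which resumes the feed, while no comparable path out of the activity is displayed on the same screen.

```typescript
// Schematic sketch of the time-alert dark pattern (hypothetical, illustrative only).
// The reminder invites a single action; leaving the app is possible but unsupported.

interface Affordance {
  label: string;
  salience: "primary" | "none"; // how strongly the interface invites the action
  onSelect: () => void;
}

function resumeFeed(): void {
  console.log("…autoplaying the next video");
}

function showBreakReminder(minutesWatched: number): Affordance[] {
  console.log(`You have been watching for ${minutesWatched} minutes. Time for a break?`);
  return [
    // The only readily enactable path: dismiss the message and continue.
    { label: "Got it", salience: "primary", onSelect: resumeFeed },
    // Exiting the activity is left for the user to assemble "manually":
    // no button, no salience, no displayed path from this screen.
  ];
}

showBreakReminder(45).forEach((affordance) => affordance.onSelect());
```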

Beyond the single habit, we can analyse the level of activities, or networks of habits. One of the most relevant aspects to highlight at this level is how digital platforms design the enclosure of user activity networks.Footnote 11 Aimed at keeping users engaged within the platform, interface design often translates into designing an immersive user experience. Immersion can be phenomenologically conceived as a feeling of flow and of a certain automaticity in our actions, of not having to think about action-mediating structures, of being hardly distracted from our use and of experiencing little or no resistance or friction. This also relates to the idea of “absorbed” or “skilful coping” in phenomenology, closely tied to Dreyfus’ account of expertise (Dreyfus & Dreyfus, 1980) and drawing from classical phenomenological literature (mainly Heidegger’s and Merleau-Ponty’s legacy). If we take this specifically to technological use, a crucial concept is that of transparency, the “disappearance” of the tool from view (see Van Den Eede, 2011 for a review), which has already posed ethical challenges in the context of digital technologies, particularly in the extended mind literature (Clowes, 2020; Farina & Lavazza, 2022; Wheeler, 2019). Transparency so understood relates to using a tool (and, we should add, behaving within a digital environment) automatically and without even being “aware” of it. On our account, these phenomenological notions are related to the intrinsic normative dimension of networks of habits discussed in Section 3.2: stronger networks of habits show a higher coherence between acts, an easier transition from one to the next. As Di Paolo et al. (2017, p. 156) explicitly state, “senses of flow and immersion (…) could be explained in terms of coherent, long-range relations between integrated sensorimotor schemes”. And this is precisely what is favoured in digital platforms by extremely easy-to-use and phenomenologically transparent designs, in what some authors in the extended cognition literature would call ‘transparency-in-use’ (as opposed to ‘reflexive transparency’) (Andrada et al., 2022; Clowes, 2020) or ‘transparency-as-automaticity’ (Pérez-Verdugo, 2022).

Rather than achieving a strong and coherent network of habits through the mastery of these sensorimotor interactions by small regulations of the parameters of the coupling, highly designed environments can immerse us directly in pre-defined strong sensorimotor networks that are not the result of our previous regulations and afford little or no future regulation. Once immersed in a coherent network, strengthened by continuous enactment, it becomes harder (although, obviously, still not impossible) to modify the parameters of the interaction, either by further transforming the tool or by using the tool in different ways. What this means, then, is that hyper-designed environments can get us to experience “flow” or “absorbed coping” not as a result of us becoming actual skilful experts (capable of regulations when needed), but by way of carefully designing sensorimotor environments that “pull” us to immersion.Footnote 12 Although both experiences (skilful coping and environmentally-prompted absorbed coping) can feel similar, the difference becomes evident when we expand the temporal scope and see how that experience develops and how a strong asymmetry is established in the ways in which possible virtualities are determined and regulated by the environment (and not the agent).

As we can see, this has implications both for local autonomy, in the sense that the extent to which we can control our behaviour (via a regulation of the coupling) is diminished in particular cases, and for global autonomy. Through the constant enactment of rigid habits, “pre-defined” by the environment, the resulting networks that constitute the sensorimotor individual become less authentic (usually followed by feelings of guilt, despair, or sadness).

Before we finish this section, however, we should mention that it is true that we might sometimes want to let technology unidirectionally “pull” us to an experience of flow in a particular use. For example, if we really want to master a language but have difficulties managing to maintain a focus on doing so, we might want to use an app purposely designed to immerse ourselves in its use. And we probably shouldn’t categorise this as an autonomy-diminishing case of technology use.Footnote 13 However, this doesn’t invalidate our discussion around the centrality of having a chance at regulation: we do consider that for this technology not to be autonomy-diminishing, we need to have had at least one broad possibility of regulating our use, namely the possibility to choose this pre-defined experience over other kinds of experiences with a particular use. Although the regulation of the specific parameters of our use of the app is not a product of our skilful regulative behaviour, at a broader agentive scale, our using of the app in this particular way is a skilful move towards our goal of language learning. As such, certain technologies could be understood to be autonomy-enhancing even if designed to quickly stabilise habits through dark patterns or easy interfaces, provided that we can regulate our use of said technology and provided that broader regulations effectively subserve the intended goal.

4.3 Possibilities for Autonomy Enhancing Digital Interfaces

Technologies are not always autonomy-diminishing by virtue of their being designed by other agents. In many cases, artefacts still allow for regulatory interactions by the agents that use them. This should be the main aim of autonomy-enhancing technological design. The possibilities of dynamic hyper-designability of digital technologies should be exploited precisely to afford novel and creative regulations by the users or participants. We here sketch some ideas that can help guide this design approach.

As argued in Pérez-Verdugo (2022), we consider that true (autonomy-respecting, or even enhancing) technological “extension” should focus on the possibilities for adaptive regulation that the tool can offer, which is what will lead to the mastery of sensorimotor interaction. Following Di Paolo et al. (2017, pp. 156–157), one of the main aspects of the asymmetrical regulation of the sensorimotor network is that it is an adaptiveFootnote 14 regulation, a modulation of the sensorimotor coupling that is sensitive to the viability of the agent (i.e. to the norms of its persistence). Furthermore, it is in (and for) this constant adaptive regulation that the agent actively and asymmetrically “reasserts its own sensorimotor individuation” (Di Paolo et al., 2017, p. 157). In other words, the exercising of our personal autonomy, our active individuation as concrete sensorimotor agents, is a consequence of our adaptively meeting challenges to our viability in novel ways that are –or become– nevertheless still ours.

Thus, if we want any technology to contribute to this process, it should not be a tool prompting static and imposed re-enactments, but rather one that facilitates adaptive enactments. It should increase the agent’s sensitivity and response-ability, rather than the opposite. The focus should be on facilitating local and global autonomy starting from the sensorimotor level, by making it easier to asymmetrically and adaptively regulate the coupling with the environment, including its transformation. And, again, this can be done at the level of the single sensorimotor scheme, at the level of activities and, ultimately, at the level of the identity of the agent.

Regarding the digital design that affects regulatory (a)symmetries in sensorimotor schemes, we find the opposite of dark patterns in the experience of navigating Wikipedia, where perceptual salience is reduced and action possibilities are flattened. The content of each page is almost plain text (with styles used only to highlight section headers) and illustrative images, except for clickable links (blue, underlined text) that jump to another page. However, all these links are equally signalled: none is favoured over the others, nor over continuing to read the text. Wikipedia is designed so that the reader can choose how they want to explore it, either by reading a text in depth or by clicking on whatever particular link they find interesting. But neither one nor the other option is more or less supported by the environment, unlike in other designed digital environments.
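The contrast can be rendered schematically (hypothetical code and values, meant only to illustrate the point about salience): a flat policy gives every link the same visual weight and leaves the order of the text untouched, whereas an engagement-optimised policy ranks links by predicted clicks and makes the top candidate stand out.

```typescript
// Illustrative contrast (hypothetical values): with a flat salience policy the
// reader decides what to follow; with an engagement-optimised one, the environment does.

interface Link {
  href: string;
  predictedClickRate: number;
}

// Flat policy (Wikipedia-like): uniform weight, document order preserved.
function flatSalience(links: Link[]): Array<{ href: string; weight: number }> {
  return links.map((link) => ({ href: link.href, weight: 1 }));
}

// Engagement policy: reorder and visually boost the link most likely to keep the user clicking.
function engagementSalience(links: Link[]): Array<{ href: string; weight: number }> {
  return [...links]
    .sort((a, b) => b.predictedClickRate - a.predictedClickRate)
    .map((link, rank) => ({ href: link.href, weight: rank === 0 ? 3 : 1 }));
}

const links: Link[] = [
  { href: "/wiki/Habit", predictedClickRate: 0.02 },
  { href: "/wiki/Autonomy", predictedClickRate: 0.05 },
];
console.log(flatSalience(links), engagementSalience(links));
```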

Another key concept that might be useful for autonomy-enhancing technologies is that of customization. While “clear” and minimalist digital designs have been leading the trends for many years, they usually come with a diminishment in options, settings and configuration (or these get hidden from the users). In many cases, what is lost is a deep possibility of customization.Footnote 15 Most “customizable” options in digital products and services are, unfortunately, very superficial and limited to merely cosmetic aspects, but deep customization should also include more functional and profound changes. Sometimes the simple possibility of arranging and rearranging our digital environments might be what can best lead to an asymmetrical (on the side of the agent) stabilisation of authentic networks of habits. Think of the possibilities of (re)arranging the icons on your desktop: establishing them in a particular (dis)position, deciding which ones will be included and which ones will not, which will go in the centre of the screen and which in the less salient corners, and the subsequent adaptive regulation of activities that the arrangement of icons makes possible.
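In a minimal sketch (hypothetical code, merely illustrative), deep customization of this kind amounts to the arrangement being data owned and edited by the user, which the interface then reads, rather than a layout imposed and reshuffled by the platform:

```typescript
// Hypothetical sketch: the desktop layout as user-owned data that the interface reads.

interface IconPlacement {
  id: string;
  column: number;
  row: number;
  visible: boolean;
}

// A user-decided arrangement: what appears, what is hidden, what sits in the centre.
const myDesktop: IconPlacement[] = [
  { id: "text-editor", column: 4, row: 2, visible: true }, // centre: frequently used
  { id: "video-app", column: 0, row: 7, visible: false },  // deliberately demoted and hidden
];

function render(layout: IconPlacement[]): void {
  for (const icon of layout.filter((i) => i.visible)) {
    console.log(`placing ${icon.id} at column ${icon.column}, row ${icon.row}`);
  }
}

render(myDesktop);
```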

We need not abandon some degree of “phenomenological transparency” or ease of use in itself as a goal (particularly because of the universal accessibilityFootnote 16 it warrants). Phenomenological approaches to digital design should precisely focus on finding new ways of achieving an accessible and comfortable experience of use without giving up autonomy. Instead of aiming at completely transparent interfaces, we can try to design rather translucent technologies: technologies that, while allowing for easy use (especially when tailored to the user’s needs through their continuous use and mastery), do not “disappear from her view” from the start. This (minimal) awareness of the tool doesn’t necessarily need to be cast only through the notion of breakdowns in use, but also through the enhancement of situated awareness (Endsley & Jones, 2012), and it would allow the locus of regulation to be retained by the agent. This idea has already been explored by “unorthodox” interface designers who use friction to enhance mindful interactions with digital technologies (Cox et al., 2016; Mejtoft et al., 2019).

The main obstacle to autonomy-enhancing technology seems to be the overwhelming technical complexity underlying digital technologies, which completely surpasses what we can individually control at each computational step. Because of this, digital interactions need to take place at highly abstract and simplified layers to become effective. Generally, the domain in which the users’ actions are “generated and interpreted” (Winograd & Flores, 1986, p. 165) is restricted to aspects that fall outside the regulation of the functioning of the tool. Aspects of more technical domains are seen as interfering with our current use (and, if they come up in our interaction with the tool, they can be seen as a case of bad design, Winograd & Flores, 1986, p. 165). However, good designs should not be those that merely avoid more technical domains, but rather those that focus on adequately bridging, or even merging, usually separated domains to create new, more encompassing ones where regulation is not felt as interference. If we go back to the level of activities or networks of habits, we can see how autonomy-enhancing technologies might not be those that “immerse” us from the beginning in a feeling of flow, but rather those that allow us to achieve that feeling of flow out of our mastering of the different activities (and of the switching between them). Rather than immersing the users in an action domain that escapes all activities related to the regulation of the tool, we should strive to achieve an expert use that masters the enacting of more regulatory networks of sensorimotor schemes when needed for their adaptive goals. The GNU/Linux operating system is a good example, ranging from the most text-based command line interaction space (Stephenson, 1999) to the most recent and highly configurable yet intuitive desktop environments like KDE (Uzayr, 2022), in which mastery of the navigation of menus, use of shortcuts, etc. and capacity for recursive configuration are well balanced.

We are aware of possible objections to these ideas based on the fact that many of these highly regulable technologies can “scare away” or even exclude people who lack the technical ability to perform these regulations, or who might not feel compelled to do so. We find it crucial to stress here the relational character of personal autonomy, and the need for an adequate social context that can provide support in the development of the necessary abilities (see discussion around “self-trust” and autonomy in Section 2). Even if technologies are creatively designed so that domain-switching allows for a comfortable user experience that can get into deeper regulative domains (for example, in a scalable fashion, as with Mozilla Firefox’s gradual customization optionsFootnote 17), social support for users might also be required, granting them the opportunity to acquire the abilities (and habits) of regulation. Online forums, digital and hacker culture, and digital training and capacity-building are, ultimately, an important part of autonomy-enhancement.

5 Discussion and Conclusions

Throughout this paper, our main aim has been to offer a useful and operational analysis of how technological design at the sensorimotor level, particularly within digital platforms, has an impact on personal autonomy. We have used an enactive framework that grounds cognition while naturalising the three dimensions of personal autonomy (structural, temporal and relational). Moreover, the notion of asymmetric modulation of sensorimotor coupling made it possible to account for what it means to control an action in a local manner. A more global normativity emerges for the sensorimotor agent as an integrated web of habits and can be grounded in the continuous enactment of controlled sensorimotor interactions.

Habits' relational nature, relying on support structures intertwining brain, body, and environment, enabled us to analyse the role of the technological environment in the constitution of personal autonomy. We could thus differentiate between natural and technical support structures, providing a first step forward in enactive theorising of how technology entangles agency. Technological design introduced a potential for extrinsic regulation of an agent's habits by other agents’ transformations of her environment. And if a technological support structure is stabilising certain habits rigidly and asymmetrically (without the agent being the locus of control at any point), it might be diminishing the autonomy of the user. On the contrary, support structures that open up possibilities for adaptive regulation can be considered autonomy-enhancing. We can summarise our practical implications as follows:

  • “Dark patterns” and other kinds of strategies that seek to asymmetrically modulate the parameters of viability of habits have the potential of being autonomy-diminishing if not open for regulation by the user.

  • Extremely easy-to-use apps that stabilise networks of habits without the need for a certain kind of (skilful, agentive) regulatory behaviour by the user also have the potential to be autonomy-diminishing, if used by default to immerse the user in a feeling of flow that isn’t the result of her skilful coping.

  • Offering users possibilities for deep, scalable customization and regulation of digital platforms can open the way for autonomy enhancement.

  • Designing with translucency in mind, rather than (phenomenological) transparency, can help achieve a greater degree of regulability.

  • Phenomenologically inspired designs can be useful for achieving user experiences that can swiftly alternate between different domains, task-oriented and regulation-oriented, without considering them incompatible.

We are now in a position to return to the most classical literature and provide a clear picture of how technology affects personal autonomy. Through focusing on the importance of the relational dimension in a technologically (rather than merely socially) situated way, we have given new relevance to Oshana’s claims about how the environment might alter the possible courses of action of the agent in digital contexts, favouring certain habits over others through design. Our proposal is strongly relational in that sense, but it also stresses fundamentally temporal and structural aspects. For instance, a temporal focus is in play to account for the potentially autonomy-diminishing effects of immersion when it is asymmetrically prompted by the technological environment; it is not the resulting network of habits that determines why it is autonomy-diminishing, but how it came to be stabilised, what the role of the agent in that stabilisation was (something reminiscent of Christman’s remarks on autonomy and personal history) and its future potentiality (or rather the lack of it). Furthermore, sensorimotor agents are temporally extended agents -in Bratman’s terms- given the global normativity that arises from their need to maintain their identity (by enacting it). And our analysis of how some of the habits that constitute an agent can go against its overall agential normativity, as they are not regulated according to it but according to the design of the environments where they take place, naturalises Frankfurt-type structural analyses in the form of conflicting habits.

Moreover, our framework offers a politically relevant analysis of the mechanisms by which certain design practices entrap users within the fabric of the digital support structures of their sensorimotor networks. Activities are operationally enclosed through the immersive architecture of digital platforms and through the dark patterning or invisibilization of the possibilities of exiting or halting an activity.Footnote 18 The easier it is to enact the habits of using a certain platform, the harder it becomes to destabilise that network of habits in order to leave it. We can now return to the opening quote of the paper (attributed to Bill Gates): “power in the digital age is about making things easy” (quoted in Moll, 2018). Power, here understood as the heteronomous control of (social) behaviour, is about designing (making) some behaviours so that they are more preferable, directed or prominent than others, by making digital environments easy (or hard) to use towards certain goals, and ultimately “impossible” to escape from.

Our analysis also resonates with Simondon’s claim that “the technical objects that produce the greatest alienation are those meant for ignorant users” (2017, p. 255). “Ignorance” here refers to a lack of awareness of the “operational functioning” (Simondon, 2017, p. 252) of the technical object, an awareness that implies a continuation of the act of invention (or of technical behaviour, in our account). We need to be aware of the operational functioning of the support structures that were designed for us in order to be able to continue transforming and regulating them. But the fact that massive experiments, data and deep learning techniques increasingly dominate interface design makes this continuation of invention through participation impossible. We are left thrown into a digital world whose structure (starting from the sensorimotor layer) is cognitively impenetrable and does not respond to a human invention that could later be appropriated, continued and reconfigured (not only because corporate control puts the infrastructures out of reach, but also because of the very deep and complex automation of design).

Within the digital world, important requirements to counteract this alienation are the practical, legal and technical guarantees to access, modify, share and collaborate on the design and deployment of digital infrastructures recursively, from the level of graphic design all the way down to computer code and the underlying stack. Most of these requirements demand that the software be FLOSS (Free/Libre Open Source Software), meaning that users are free: 1) to use the software as they wish, 2) to copy it, 3) to understand and modify it and, to that end, to have access to the code that specifies what the software does, and 4) to publicly distribute the modifications they may have made (as with the GPL license, see Stallman, 2015), extending these freedoms also to the code that is executed remotely on the servers that provide a particular digital service (as with licences like the Affero GPL). Deeper autonomy-enhancing requirements can involve cryptographic guarantees about the code being executed, like those of so-called Decentralised Autonomous Organisations (Hassan & De Filippi, 2021), or the transparency and explainability principles for complex algorithms such as AI.Footnote 19 For instance, it has already been argued (Vaassen, 2022) that transparency, understood as the extent to which users are in a position to “grasp the causal explanation of outcomes” (p. 5), is directly related to personal autonomy given its “action-enabling potential” (p. 7).Footnote 20 These autonomy-enhancing conditions can be understood in terms of the depth at which someone (as a member of a technical community) can understand and transform her digitally structured (sensorimotor) world and guarantee that such understanding and transformations hold as intended or agreed upon. They also highlight the inescapable collective character of any autonomy-enhancing technology, due to the supra-individual nature of its complexity.Footnote 21

A particularly relevant case study around the relationship between personal and collective autonomy in digital platforms is Decidim,Footnote 22 a free software digital platform used by different institutions (governments, NGOs, social movements, etc.) to foster democratic participation. Being a participatory platform for self-governance, it is designed precisely to enhance autonomy not only in the different processes that take place through the platform but also in the autonomous (self-governed and democratic) design of the very platform (Barandiaran et al., 2023). The Metadecidim communityFootnote 23 becomes autonomous at a collective level, designing, programming, regulating and customising the platform to enhance the autonomy of each user. The involvement of the community in the design of the platform can be seen as an example of “participatory design” (see Bannon et al., 2018), highlighting the importance of the experiential knowledge of end-users from the community in designing politically just platforms (Costanza-Chock, 2018). In this sense, an analysis of the personal autonomy of Decidim’s participants would be incomplete without turning to the collective level of the Metadecidim community.

Moving beyond the sensorimotor domain, there is an exciting road ahead in developing more complex ideas of self-control rooted not only in sensorimotor agency but also in the social, collective and linguistic dimensions of agency and intentionality (Bandura, 2001; Di Paolo et al., 2018; Satne, 2021; Tomasello et al., 2005). For instance, Di Paolo et al. (2018) offer an account of how “reflexive” personal autonomy –the possibility of controlling oneself while being aware of doing so, taking a somewhat detached position towards our body– emerges from dialogical situations between a special kind of sensorimotor bodies: linguistic bodies. This process is a participatory one, in which the tensions and ambiguities in the messiness of interaction are the central engine driving more complex intersubjective relationships. The design of digital technologies, many of them specifically made for social interaction, plays an increasingly important role in providing a rich enough infrastructure for shared experiences. In many cases, however, the current design of social media heavily conditions and oversimplifies interactions; sharing, in the end, is not merely a button. Technological mediation is thus not only a product of, but also a condition for, these kinds of intersubjective and (pre)reflexive abilities, and it plays a –still largely unexplored– role in the formation of personal autonomy in all its dimensions, including its transindividuality (Simondon, 2017). Moreover, most of the habits that digital platforms exploit are of a social nature. Social conformity is one such habit: it features prominently in social networks’ interfaces (providing actuatable displays of what others have done, liked, supported, followed, etc.) and has been widely exploited to steer human behaviour (with the now classic example of Facebook successfully driving thousands of users to vote in the US elections, see Zuboff, 2019b).

Among the myriad research directions that the sensorimotor grounding of technology studies and autonomy brings forth, there is a promising line of theoretical development connecting individual sensorimotor habits with social habits and with the sedimented effects they harden into technologies themselves. Such a line could be traced from Marcel Mauss’ concept of the “techniques of the body” (Mauss, 1973) to Foucault’s work on technologies and social dispositives (Foucault, 1995) or Bourdieu’s concept of habitus and its contemporary digital re-appraisal (Airoldi, 2021; Bourdieu, 1977; Kadrow & Müller, 2019), viewed through the lens of the enactive approach to autonomy-enhancing technologies. This research line could develop into a fruitful encounter with contemporary social justice theory in a double sense: understanding race and gender as (social) technologies themselves (Chun, 2009), and understanding how structures of social domination are themselves embodied in modern technological (digital or otherwise) devices (Liao & Carbonell, 2023).

Upcoming technological innovations seem to be moving towards even more intimate interface designs. We can identify two trends in this direction: on the one hand, virtual/augmented reality expands even further the degree of designability and dynamicity of digital environments,Footnote 24 and, on the other, brain-computer interfaces shorten the distance between agent and environment (see Fairclough, 2023; Friedrich et al., 2021, for analyses of neuroadaptive technologies and personal autonomy). The increasing intimacy of the sensorimotor interactions that such new interfaces embody makes it urgent to analyse how they impact sensorimotor autonomy. As we have shown, enactivist theories might offer a fruitful framework for doing so.

We can conclude by saying that the identity of a person (hence her autonomy) and that of her world are two sides of the same coin. By (hyper)designing –systematically and experimentally modifying– bit by bit, pixel by pixel, the digital environments that support our personal worlds, we are also transforming our identities in deeply asymmetric ways, for which our culture and institutions (not to mention our precarious minds) have few resources to compensate. Identifying and understanding the concrete ways in which sensorimotor interfaces and technological platforms enhance or diminish our autonomy is a first step towards counteracting this imbalance and regaining some personal (and collective) autonomy.