1 Introduction

Approaches in theoretical cognitive science and philosophy of mind that fall under the ‘4E’ umbrella, where various forms of cognitive and mental operation are understood as extended, embedded, embodied, and enacted, are now part of the mainstream (Hutchins 1996; Clark 1997, 2008; Clark and Chalmers 1998; Chemero 2009; Sterelny 2003, 2004, 2010). Indeed, as one recent review noted, arguments that cognition is constituted by, or heavily reliant on, external artefacts and scaffolds in the material world can’t claim a revolutionary status any longer (Carney 2020). With these views now a part of the philosophical establishment, a dissenting choir of voices has called for the recognition of a deeply-rooted bias in the presentations of these ideas. In short, these voices argue that the 4E literature has displayed a consistent tendency for unwarranted optimism in assessing our relationship with the material and technological world, where our dependencies and interactions are too often presented as taking place entirely on our own terms, working only in our favour (for articulations of this criticism, see Slaby 2016; Aagaard 2021; Spurrett 2024; Timms & Spurrett 2023). These critiques have urged us to make space to analyse the many cases where our material engagements are hostile, oppressive, and harmful.

This paper joins the conversation on hostile scaffolding. Scaffolding here is defined in loose terms as external structures that alter the demands of a cognitive task, and may encompass varieties of embedded, distributed, or extended cognition (Clark 1997; Sterelny, 2003, 2004, 2010). Timms and Spurrett (2023) define hostile scaffolding as a form of what might be termed adversarial scaffolding, deliberately deployed and maintained by one agent at the expense of another agent (pp. 62). This is distinct from scaffolding that might be harmful through plain incompetence, like an inaccurate calendar or a broken alarm clock. Truly hostile scaffolding is marked by there being both victims and beneficiaries (Timms & Spurrett 2023, pp. 62).

More specifically, in this paper I focus on the key notion of “mind invasion”, introduced by Slaby (2016) and also developed and deployed effectively in later work by Spurrett (2024). Mind invasion is presented as a subspecies of hostile scaffolding (Timms & Spurrett 2023), and my view is that there is an important inconsistency between the way the concept is originally developed in Slaby (2016), and the way it’s been subsequently deployed by others, notably in Spurrett (2024) and Figà-Talamanca (2024). I argue that the concept of “mind invasion” should be used more narrowly, and that further conceptual resources are required to describe cases that, I argue, exhibit a more insidious character.

Specifically, I argue that examples like the hostile scaffolding of a casino environment, which is a key example in Spurrett (2024), and adaptive technology (Figà-Talamanca 2024), fail to meet the criteria for mind invasion, and actually exemplify a different class of case. Drawing on the work of Harry Frankfurt, I introduce the notion of the ‘techno-wanton’, and argue that many cases of hostile technological scaffolding are better conceived of as potentially diminishing personhood via a degradation of the dynamics between preferences and volition. Roughly speaking, while mind invasion describes a process whereby truly hostile scaffolding invades the mind of an agent against their will (and in contravention of their own preferences), techno-wantonness describes cases in which technology panders to a subset of an agent’s existing preferences to undermine hierarchical structures of agential control. I argue that the key examples of casino scaffolding and adaptive technology are forms of techno-wantonness and not mind invasion. Mind invasion itself is a useful concept, and so a sharp distinction is necessary for thinking clearly about developing technology and formulating the strongest critiques.

In Sect. 2 I unpack the relevant recent work on hostile scaffolding, highlighting key points of critique against the traditional picture that emerges from established approaches in, especially, embedded and extended cognition. In Sect. 3 I offer my critique, arguing for an inconsistency in the use of “mind invasion”, and then using Frankfurt’s work to introduce the “techno-wanton.” Specifically, I utilise Frankfurt’s figure of the ‘wanton’, a theoretical entity lacking a reflexive volitional structure, to highlight how some technologies elicit temporary wanton-like characteristics in their users. In Sect. 4, I introduce examples of adaptive technology, focusing on the case of social media algorithms and drawing from recent work in cognitive science to sketch some of the key features of technology that threatens to induce states of techno-wantonness, including low-friction engagements and endless novelty.

2 Section One: Hostile Worlds

2.1 A Dogma of Harmony

The volume of literature falling under the umbrella of ‘4E’ cognition that sets out to characterise our relationship with technology and material space is enormous, casting its net over such material resources as computer interfaces (Kirsh and Maglio 1994; Kirsh 2019), laptops and smartphones (Chalmers 2019; Carter and Palermos 2016; Bruineberg and Fabry 2022), navigation systems (Wheeler 2019; Clowes 2020), humble notebooks (Clark and Chalmers 1998), silicon brain implants (Clark and Chalmers 1998; Clark 2008a, b), music venue design and technology (Wheeler 2018), food preparation practices (Sterelny 2010; Greenwood 2015), and general material and cultural niche construction (Sterelny 2003, 2004, 2010; Veissière et al. 2020; Constant et al. 2022). These contributions encompass a wide array of cognitive operations and mental phenomena including belief and memory (Clark and Chalmers 1998), affectivity and emotion (Krueger 2014; Colombetti and Roberts 2015; Krueger and Szanto 2016), our sense of self (Heersmink 2020), consciousness (Kirchhoff and Kiverstein 2019; Vold 2020), and self-control (Heath and Anderson 2010).

Although the 4E literature has revolutionised the way we conceive of cognitive creatures and mental life, providing many valuable insights, it has more recently been the target of a specific kind of criticism. Aagaard (2021) captures things succinctly, drawing attention to what he calls a “dogma of harmony” pervading the 4E paradigm, through which:

4E scholars tend to paint an overly idealised picture of human–technology relations in which all entities are presumed to cooperate and collaborate…this dogma of harmony is not necessarily an incorrigible fallacy, but may be alleviated by broadening the analytical spectrum of 4E literature (pp. 166).

For Aagaard, this bias toward viewing our couplings with technology as always cooperative and beneficial isn’t necessarily baked into 4E notions of extension or embeddedness, but it does manifest as a theoretical a priori, exemplified in the language pervading much of the work on extended cognition, especially (according to Aagaard) that of Andy Clark. An exploration as to why this theoretical a priori is so pervasive is beyond the scope of this article, although it should be said, in passing, that Aagaard’s suggestion (drawing on a critique by Selinger and Engström (2007)) that Clark’s work is driven by a kind of ‘quasi-religious’ post-human desire to privilege human consciousness and leave nature ‘ontologically redundant’ (Selinger and Engström 2007, pp. 568) seems doubtful. The target of Selinger & Engström’s critique, for example, Clark’s 2003 book ‘Natural Born Cyborgs’, contains an entire chapter dedicated to exploring multiple key areas of concern, including some that seem to lie at the heart of Selinger & Engström’s critique.[Footnote 1]

Nevertheless, Aagaard rightly draws attention to the 4E literature’s widespread focus on specific “tasks” and how users are often depicted freely selecting from a “candidate pool” of cognitive resources (Aagaard 2021).[Footnote 2] The canonical example of the extended mind - intended as an intuition pump to counter our “bio-chauvinistic influences” (Clark 2008a, b, pp. 114) - found in Clark and Chalmers (1998) allegedly exhibits this tendency. We’re asked to imagine a man, Otto, who has mild Alzheimer’s disease. In order to form useful beliefs about the location of a museum he’d like to visit, he refers to a trusty notebook, in which he writes down all kinds of useful information, meaning that the notebook serves the same functional role as biological memory (Clark and Chalmers 1998, pp. 12). Otto has freely selected to use his notebook, trusts it, and engages with it on his own terms. Indeed, it’s these notions of ‘trust and glue’ that form the first constraints on what should, or should not, count as mind extension.

This focus on “users” selecting and engaging with “resources” forms the basis of another, earlier, critique, by Slaby (2016), who fixes his sights on an ‘oft-recurring scheme - particularly prevalent in Andy Clark style extended mind theory’ (pp. 5). Slaby describes this oft-recurring scheme as a ‘blindspot’ and a ‘one-sidedness’, a shortsightedness he dubs the “user/resource” model, in which we assume the supreme sovereignty of the “emancipated” user who freely chooses the what, when, and how of their technological and material engagements, a kind of “libertarian-friendly, post-1980, California style” agent, free of the normative forces that actually complicate and muddy our complex relationships with technology and materiality (Slaby 2016, pp. 6). This optimism, it’s argued, comes at the high price of turning a blind eye to the fact that so much of our cognitive scaffolding is imposed upon us in politically inflected ways: those infuriating self-service checkouts, for instance, exist not because I, as an emancipated cognitive agent, choose them, but because they are the inevitable result of the capitalist drive to reduce costs and maximise profit. As Selinger and Engström (2007) put it:

The contexts in which choices are made and socially empowered agency is most decidedly at play cannot be reduced to “coalitions of tools” and their/our “mental profiles”—concepts that are ill-equipped to capture the collective economic and political forces that circumscribe and incorporate our “mental profiles” and that selectively produce and utilize some rather than other tools (pp. 571).

The complaints seem to be gesturing toward the same fundamental thought: namely, that our engagements with technology and materiality are far from untroubled when it comes to questions around freedom and agency, specifically in the context of how we choose to engage with technology and how it restructures our experience of the world. And while it is true, as Clark (2003) points out, that none of our engagements in the non-scaffolded world are free from these muddying and complicating factors either, these critics are keen to emphasise that the grounds of our scaffolded engagements are never neutral, and their effects are often far from benign. According to these concerns, the kinds of scaffolding we endure on a day-to-day basis are potentially a far cry from the mind-enhancing engagements that often sit front and centre in the 4E literature. How the technology we use to scaffold our cognitive and affective lives will impact our wellbeing should concern us, especially, I will argue, in the case of emerging technologies, powered by ever more advanced forms of artificial intelligence. Although, as Clark notes, the wider non-scaffolded world is permeated by the same economic and political forces, when it comes to emerging technology we have a crucial opportunity to make decisions about design and implementation. We must be sensitive to the undermining (deliberate or not) of agency, the harms that this may cause, and to the political and economic forces that drive the design and deployment of such technology, and the kinds of power they entrench or centralise. Part of the motivation for this paper is to contribute to a theoretical base for doing this more effectively.

2.2 Mind Invasion

Slaby (2016) draws on work in distributed cognition, especially within richly structured social environments, to flesh out an account of potential harm or hostility as a form of what he calls “mind invasion.” Slaby’s focus is institutional social domains - like corporations or universities - that contain powerful ‘formative structures’, such as architecture, technology, and norms, that “help sustain recurring practices, modes of interaction and relational dynamics” (pp. 8). Slaby describes a process of embedded normativity (see Guénin-Carlut and Albarracin 2023; Guénin-Carlut et al. 2023), whereby material and social organisation is not ontologically distinct from the rules and norms it embeds and upholds (Slaby 2016, pp. 8). He goes on to describe the myriad ways that these richly scaffolded social domains “hack” individuals’ affectivity, whereby those who have been moulded and shaped over time become part of the active scaffolding that acts so effectively on any novice initiate.

To be very clear, Slaby’s notion of mind invasion explicitly and unequivocally refers to cases where there is a necessary kind of unwillingness (pp. 2). Mind invasion is a form of framing, moulding or sculpting embedded within material, technological, and social scaffolds that degrades the initiate’s ability to pursue “self-avowed goals” and preferences (pp. 6). This fundamental hallmark of mind invasion, namely the unwillingness of the “victim”, is crucial to the argument. Within this kind of social-institutional machinery, individuals are oppressed into “performing” the embedded norms. For example, while one might, after some time in a new workplace, come to perpetuate and reinforce certain hostile norms, one is typically doing so performatively; the norms of that space were never really one’s own (Slaby 2016, pp. 8). Mind invasion, it seems to me, speaks not to cases where an agent freely seeks material couplings to “enlarge their mental repertoire”, but exclusively to cases in which an agent’s mind is “hacked” in ways running counter to their previous orientations, and in ways detrimental to their personal flourishing or long-term happiness (Slaby 2016, pp. 2). By way of example, Slaby helpfully turns to cases of “presence bleed” in the new digital corporate landscape, wherein technology and culture facilitate the bleeding out of work time into non-work domains, like home or the beach. Technology like email, smartphones, and work-oriented social media platforms like Slack or Microsoft Teams means that we are increasingly unable to disconnect from work. Crucially though, this hyper-connectedness is not to be explained only in terms of our user-resource couplings with specific bits of technology, but also through:

a far-reaching ‘mental infrastructure’, with complex patterns of affect and affective relations that play important operative roles in the universe of contemporary white collar work (pp. 10).

Slaby righteously excoriates tech-facilitated toxic workplace culture that leaves people feeling pressured, overwhelmed, and guilty, trapped in “pervasive forms of workplace competition brought about by advanced performance metrics as part of new regimes of data-driven management” (pp. 10). This is hostile situated affectivity, whereby we allow “the frantic halo of the information workplace to seep into one’s intimate sphere of existence” (pp. 10). I agree with Slaby’s assessment that such cases of brutally hostile situated affectivity (scaffolded with modern technology) provide compelling counterpoints to a simplistic user-resource model. To drive this view of mind-invading affective scaffolding home, consider perhaps the most abrasive of all socio-cultural-material scaffolds: prison. It seems to me that if anything is a form of mind invasion, in the sense of imposing a construct of cruel scaffolding, then prison is. People are typically sent to prison very much against their will, and the norms of prison, including routine exposure to serious violence (Worrall and Morris 2012; Quandt and Jones 2021), exposure to hard drug use, and new social hierarchies represented by gangs and prison authority figures (Worrall and Morris 2012), are more hostile, harmful, and oppressive than even the most toxic corporate environment. Indeed, prisons themselves, in their very conception and design, are intended to run counter to a person’s prior orientations.

David Spurrett too has argued that our engagements with material and technological scaffolding are very often less amicable than the broader 4E literature would suggest, and has provided compelling examples of “hostile and oppressive affective technologies” in what he calls a “manifesto and call for further work” (Spurrett 2024). Spurrett is interested in how certain forms of technological and material scaffolding can sculpt our felt, embodied stance toward the world, leading to changes in the way we are motivated to act and in how we evaluate the options open to us. Of course, the world in general always leaves us feeling some way or other; it’s impossible not to have some kind of embodied stance toward the world, something akin to what Matthew Ratcliffe has called “existential feelings” (Ratcliffe 2013, pp. 214). To distinguish between proper technological affective scaffolding and the ‘mere environment’, Spurrett draws on work by Giovanna Colombetti and Joel Krueger, who argue that ‘trust’, in the sense of perceived reliability and consistency of function, is what distinguishes affective technology (Colombetti and Krueger 2015). This is in line with the majority of the literature on human-technology interaction, which defines trust in this ‘reliabilist’ way (e.g. Lee and See 2004; Schoeller et al. 2021). Affective technologies, then, are technologies that serve as affective scaffolding in the sense of being reliably available and consistently functional in effectively shaping our embodied, affective orientation to the world: scaffolding that shapes how we feel, and therefore how we are likely to choose and act.

2.3 Casinos: Mind Invaders?

Spurrett provides several compelling examples of hostile affective technology, including cigarettes, but perhaps most interesting are the complex, highly technologized and deeply curated environments within large Las Vegas casinos, in which a lack of visible clocks, bright artificial light in place of windows, free drinks, easy bank transfers, and high-speed, fluid and adaptive gambling mechanisms all conspire (very deliberately and reliably) to keep gamblers feeling like they want to keep on gambling (Timms & Spurrett 2023; Spurrett 2024). In his assessment of casino environments as hostile affective technology, Spurrett (2024) adopts the notion of mind invasion from Slaby, asserting with confidence that “the highly scaffolded regular gambler exemplifies mind invasion” (Spurrett 2024). It’s my view that this adoption of mind invasion is mistaken.[Footnote 3] Not only does it not fit with Slaby’s original definition, but it broadens the concept so far as to obscure some of the insidiousness of emerging hostile affective technology. To obscure these more insidious operations will make it harder for us to properly criticise and challenge them.

While Slaby and Spurrett therefore converge upon mind invasion as a useful concept for thinking about the way we’re entangled with affective material, social, and technological scaffolds, Spurrett’s use of the concept seems to significantly relax the requirement for an unwilling agent. Although in both the prison case and the casino case there is, in some strong sense, a degradation of agential control, in the casino case there is also a strong sense in which that scaffolding is only effective precisely because it facilitates the satisfying of preferences that really are the agent’s. Feelings of stress and inadequacy may well emerge from toxic workplace norms facilitated with digital technology, and from the strong compulsion to keep pouring money into a slot machine scaffolded by the material interiors of casinos. But in the casino case, it’s plausible to say that (at least in the beginning) gamblers really want to gamble in a way distinct from how people who go to prison typically don’t want to go to prison. To speak in terms of affectivity, the felt experience of going to prison will be, at least for most people, very different to that of willfully entering an exciting casino floor.

This distinction forms the basis of the account provided in the next section, which argues that far from invading our minds with alien preferences and norms, some hostile scaffolding instead aims to intervene at the level of the dynamic between what Frankfurt (1971) labels first-order preferences and second-order volitions. Rather than battering down the gates of an unwilling mind, such scaffolding aims to degrade our volitional control by appeasing and perpetuating ‘lower-order’ impulses and cutting the control cords with ‘higher-order’ reflection on preferences. This conception, I contend, does greater justice to the guiding themes of the broader 4E research program, in that it rejects the idea that there is an invading force out there imposing on my mental life in here, and instead blurs the boundaries, suggesting how technology can mediate our experience by tapping into our own desires (see also Verbeek 2005, 2008).[Footnote 4] Indeed, this is certainly more in line with the concerns of Selinger and Engström (2007), and is nicely articulated by Mark Coeckelbergh, who presses us to see how “technologies and media don’t exert their influence in the form of…gods, overlords, or monsters; technologies such as AI shape our narratives by being interwoven with our lifeworld” (Coeckelbergh 2022, pp. 116).[Footnote 5] In what follows, I introduce the figure of the techno-wanton and contend that this notion speaks more clearly to this interwovenness, and also allows us to think more carefully about paths forward in terms of ethical design principles.

3 Section Two: Preferences, Persons, and Techno-Wantons

3.1 Frankfurt and the Concept of a Person

In his 1971 paper “Freedom of the Will and the Concept of a Person”, Harry Frankfurt introduces a condition for personhood resting on a distinction between what he calls “first-order” and “second-order” desires. Put simply, a second-order desire is just a desire to have a particular first-order desire. For instance, I may desire to stay home and watch television (a first-order desire), but I could also desire that I actually wanted to go to the library and work (a second-order desire). When a person wants a certain desire to be their will, which is to say they want it to be the case that they are moved to act by that desire, we can call that a “second-order volition.” Though Frankfurt allows for the “logical possibility” of having second-order desires absent any second-order volitions, he says that it’s unlikely, and contends that “persons” are characterised by their having second-order volitions (Frankfurt 1971, pp. 11). This means a reflexive structure of desires wherein agents assess the appropriateness of their preferences, often desiring to be moved in this way or that, and take a stance toward their preferences in which they ask questions of themselves, offer entreaties, make arguments, and so on (Dennett 1976).

So, in the above example, even though I’m fulfilling my first-order desire to sit and watch television, it’s likely that I actually want my desire to work to be the desire that moves me: I would prefer the structure of my motivating desires to be different. Persons, on this view, are understood as having this more complex economy of preferences and volitions. In contrast to fully-fledged persons, with their economy of first and second-order desires and volitions, is the “wanton”, an individual characterised by a lack of second-order volition, enthralled to a shallow economy of only first-order desires (pp. 11). Were I a wanton in the above example, I would satisfy my preference for television watching without ever wondering whether that’s the best desire for me to have. To illustrate the point, Frankfurt describes the wanton addict:

If he encounters problems in obtaining the drug or in administering it to himself, his responses to his urges to take it may involve deliberation. But it never occurs to him to consider whether he wants the relations among his desires to result in his having the will he has (pp. 12).

Crucially, nothing about the wanton precludes rational deliberation in the pursuit of the things they want. A wanton will employ all of the usual rational machinery in pursuing ends; it’s just that the wanton simply doesn’t care whether the will to pursue those ends is itself something worth desiring.

The wanton addict is contrasted with the unwilling addict, who has conflicting desires both to take the drug and not to take it, and, crucially, is not neutral in terms of which set of desires they want to actually be their will. The unwilling addict desires it to be the case that the desire to not take the drug be their will. However, the unfortunate realities of addiction mean that in most cases they behave in a way indistinguishable from the wanton addict. To be clear, Frankfurt believes that this interplay between first and second-order desires, and the force of second-order volitions in reflecting on and shaping the kinds of preferences we think are worth having, is one condition for personhood, writing that “one essential difference between persons and other creatures is to be found in the structure of a person’s will” (Frankfurt 1971, pp. 6). In the concluding section I return to this question of personhood, but for now, the claim is that Frankfurt’s account provides grounds for thinking that, were this reflexive economy of desire and volition to break down, so too would the completeness of an agent’s personhood.

3.2 Techno-Wantons

I contend that the real harm of some forms of technological and material scaffolding, including casino interiors, is not best conceived of as mind invasion. “Mind invasion” is simultaneously too strong and too weak. First, it’s too strong because people who enter casinos to gamble do so because they have at least the first-order desire to gamble, and very often will, at least at first, have pro-gambling second-order volitions. But even in the case of wanton gamblers the first-order desires are very much their own desires, and have not been imposed by an invading material and cultural casino structure.[Footnote 6] This is crucial; recall, in Slaby’s formulation of mind invasion, minds truly are victims…“domain naive” individuals “hacked” to comport themselves in ways that run “discernibly against” their prior orientations (Slaby 2016, pp. 2). So even in the case of a wanton gambler, it’s wrong to say that the casino has invaded their mind. In many cases though, perhaps most cases, people first entering a casino after touching down in Vegas may well be willing gamblers in the sense that they considered whether gambling in Vegas was something that they should really want to do, and genuinely reflected on whether it was wise to go gambling instead of, say, taking their family to Disney World. In this sense they have genuine second-order volitions about gambling: they are reflexive and want their desire to gamble to be the desire that moves them to act, at least for the time being. In these cases we have even less grounding for seeing them as victims of mind invasion. It seems to me that, whatever the status of the gambler, in no sense has their mind been invaded by outside forces in a way hostile to them at the outset; they formed the desire to gamble free of the casino’s scaffolding. As we’ll see, though, this isn’t letting casinos off the hook; quite the contrary, in fact.

Second, the notion of “mind invasion” is too weak, in that it doesn’t quite capture the insidiousness of some truly hostile forms of technology. My position is that this class of hostile technological scaffolding literally erodes our personhood. Suggesting that we are victims of mind invasion because we are in a technological environment that reliably sculpts affect such that we behave a certain way fails, I think, to do justice to the depth and seriousness of the harm. Genuine cases of mind invasion, such as prison, plausibly leave intact the deeper economy of preferences and volitions: I may temporarily prefer to engage in prison culture because it’s safer to do so, but my second-order volition to remain separate from those practices could well remain in place. As Slaby notes, within a mind-invading institution, our compliance constitutes a species of performativity. Indeed, my second-order volition to comport myself in this way or that may even be strengthened within a mind-invading environment, in a way reminiscent of Jean-Paul Sartre’s declaration that an occupied people can find the greatest freedom in resistance carried out by the will (Sartre 1944). In cases like the casino, what’s happening isn’t a form of invasion from outside forces upon an unwilling individual that is resisted in thought and spirit. Rather, such scaffolding learns to seduce the part of us that is willing; it learns about us, and calls to our preferences in ways that undermine the crucial dynamics that characterise our exercise of will. In such cases, we should think of the hostile scaffolding as inducing techno-wantonness: the techno-wanton is an agent who, for a period of time, displays key features of wantonness in the Frankfurtian sense. In other words, techno-wantonness is a technologically scaffolded state that erodes a person’s ability to maintain a reflexive and dynamic economy of first-order desires and second-order volitions.[Footnote 7]

To be clear though, in relation to Frankfurt’s work, I am not claiming that techno-wantons are literally wantons in the technical sense used by Frankfurt. Rather, techno-wantons are wanton-like in that they temporarily display a (scaffolded) breakdown in volitional structure that results in a state of wantonness. For Frankfurt, the distinction between persons and wantons is never explicitly about control or willpower, but about the capacity for having second-order volitions. Thus, Frankfurt identifies young children and non-human animals as two classes of wantons, but not people who might temporarily exhibit a contextual or local loss of willpower. Oshana (1998), for instance, takes ‘wantonhood’ to be neither a temporary nor a local condition but a “relatively stable condition of character” (pp. 262). Techno-wantons, then, are distinct from the Frankfurtian wanton, first in that their state of wantonness is temporary, meaning that even those of us with the strongest willpower might be afflicted by periods of techno-wantonness. And second, in that their state of wantonness is localised in relation to a specific form of technological scaffold, which is not a feature of Frankfurt’s account. Indeed, Frankfurt’s original account of wantonness naturally lends itself to an internalist reading.[Footnote 8]

Consider again the large Las Vegas casino. Spurrett makes the distinction between gamblers who aren’t being harmed because they find “the odds offered an acceptable price for an enjoyable activity”, and those with gambling problems who find it difficult to stay within budget or to stop gambling when things deteriorate. Indeed, Timms and Spurrett (2023) offer a detailed account of the ways in which gambling machines “gamify” the experience for users, using material features to deliberately erode a gambler’s ability to control how long and how frequently they gamble. According to Spurrett (2024) though, the feature characterising the latter case, and what might explain a potential shift from the former to the latter, is a process of mind invasion facilitated by affective technological scaffolding. But it’s arguable here that by labelling such technologies “mind-invading”, we slip back toward something like the user/resource model (albeit a less Panglossian version), according to which there’s a clear distinction between the user (with their naive preferences) and the technology, which sets out to impose its own preferences. In reality, the relationship between these technologies and the user is more sinister, more complex, and blurs the lines of agency. This, I submit, is more in line with the often very complex phenomenology of addiction, which exhibits an awareness of conflicting desires but an accompanying loss of control (Miller et al. 2020). According to the techno-wanton view, in both cases the gambler is motivated by their own preferences. In neither case are they domain naive, as in the case of Slaby’s corporate intern. In order for this kind of hostile technology to work, then, it must, at least in some sense, know us: it must know our preferences and ‘care’ about those preferences, nurturing them and channelling them, carefully and deliberately scaffolding them in technologically sophisticated ways that dampen the reflexive relationship with higher-order volition. In the next section, I look in more detail at the ways technology induces this effect.

At this point, we can draw a useful distinction. In Slaby (2016), the examples of mind invasion are more compelling and, I think, really do count as mind invasion, because they are examples of affectively scaffolded environments that really don’t care about a person’s preferences from the outset. A technologically scaffolded workplace affective infrastructure that inculcates a feeling of guilt for not being constantly reachable, or a prison that erodes any sense of shock or empathy toward scenes of violence, victimisation, and drug abuse, constitutes something importantly different from technology that wants to learn what you like so that it can affectively (and effectively) manipulate you into staying engaged. In the following section I turn to real-world examples of highly adaptive technology to highlight some of the key technical features that characterise truly techno-wanton technology.

4 Section Three: Friction, Novelty, and Internal Triggers

4.1 Adaptive Technology

I closed the last section by suggesting that there’s a key difference between the examples Slaby (2016) gives, of hostile work environments that really do invade our minds with sets of preferences and expectations that aren’t ours and that we don’t want to adopt, and another class of environmental and technological scaffolding that aims to do something more insidious. I argued that what characterises this latter set of cases is the scaffolding’s ability to learn about and pander to our own first-order preferences, thereby limiting the efficacy of second-order volitions. In this section, I’ll refer to such technologies as “adaptive technology” or “adaptive scaffolding”. One form of adaptive technology, the social media recommendation algorithm, is the focus of Figà-Talamanca’s (2024) article on mind-invading technology and autonomy, and while I argue that adaptive technologies shouldn’t be accounted for using “mind invasion”, Figà-Talamanca holds that the operation of an algorithm that feeds content to a user in order to engineer affect and drive engagement is an “undisputable…case of mind invasion” (Figà-Talamanca 2024, pp. 8). In this section, I focus specifically on adaptive social media algorithms, offering a detailed account that disputes their description as “mind-invading.”

Adaptability doesn’t necessarily entail hostility; it’s conceivable for adaptive technology to be entirely benign or hugely beneficial (see, for example, White and Miller 2024; White and Hipólito 2024). But it’s also possible for adaptive technology to be hostile. Figà-Talamanca (2024) is right to see that the harms posed by adaptive scaffolding differ from those of non-adaptive technology not just in their quantity, but in their kind. What imbues such technology with this kind of peril is its ability to learn about its user and then curate the affordances available to an agent (in much the same way that a highly intelligent and manipulative human abuser might). While casino interiors are adaptive scaffolding in the sense that their slow material structure (architectural and interior design) and social features (staff bringing free drinks to players) pander to user preferences, they also now employ fast-adapting digital technology, and it’s this technology that poses the most risk of techno-wantonness.

Such digital smart technologies use sophisticated techniques to learn about their user, gathering potentially vast quantities of data, and then using this learning, along with the affordances of the user interface, to form tight reciprocal couplings with the user (Zanker et al. 2019).[Footnote 9] A standard existing example is the sophisticated algorithms that sculpt the flow of available content on social media platforms like YouTube and Facebook. These adaptive capabilities sit at the forefront of many emerging technology systems, gathering data through our actions, monitoring our clicks and measuring how long we linger over a piece of content. We should note, though, that current research is looking at ways adaptive technology could go much further. For example, EEG could play a role whereby real-time analysis of brain activity is translated into an operationalizable schema that the system uses to adapt to user preferences, as in the case of video game difficulty adapting in real time to the measured boredom level of a player (Ewing et al. 2016). Similarly, the game “MindLight” uses biofeedback from wearable technology to modulate in-game features. The amount of light a player has in the game (and thus the ability of the player to see what they’re doing) is directly linked to their measured level of real-world anxiety, prompting the player to make a conscious effort to mindfully control their real bodily state. “MindLight” has been shown to be effective in reducing anxiety in children (Schoneveld et al. 2016, 2018). As technology facilitating fast biofeedback becomes better and less cumbersome to use, and as the translation process from physiological data to operationalizable schema becomes sharper, these kinds of fast adaptive systems will become more common, and more effective in achieving their ends, whatever they may be.
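To make the operational shape of such biofeedback loops concrete, consider the following minimal sketch in Python. It is purely illustrative, not the implementation behind MindLight or the system described by Ewing et al. (2016): the sensor readout is simulated, and the function names and the simple linear coupling are my own assumptions.

```python
import random

def read_anxiety(previous: float) -> float:
    """Stand-in for a real biofeedback channel (e.g. EEG or wearable
    heart-rate data). Here the signal is simulated as a bounded random
    walk in [0, 1]; a real system would read from hardware."""
    return min(1.0, max(0.0, previous + random.uniform(-0.1, 0.1)))

def light_level(anxiety: float) -> float:
    """MindLight-style coupling (assumed linear for illustration): the
    calmer the measured player, the more in-game light they are given."""
    return 1.0 - anxiety

def game_loop(steps: int = 10) -> None:
    anxiety = 0.5
    for t in range(steps):
        anxiety = read_anxiety(anxiety)  # physiological input each cycle
        print(f"t={t}: anxiety={anxiety:.2f} -> light={light_level(anxiety):.2f}")

if __name__ == "__main__":
    game_loop()
```

The structure, rather than the detail, is the point: a physiological signal is translated into an operationalizable variable, and a feature of the environment is updated on every cycle, closing a fast loop between bodily state and scaffold.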

4.2 Key Hallmarks of Techno-Wanton Scaffolding

In the rest of this section, I explore one real-world example of adaptive technology: social media algorithms. This example, in the context of recent cognitive science research, will help clarify the techno-wanton concept and motivate its adoption. Engineers have worked extremely hard to develop algorithms and accompanying interfaces that fit content to user preference in ways subtly tweaked to maximise engagement. Social media platforms are, undoubtedly, a highly effective form of affective technology, and for many of us can often feel harmful. It’s common for people to spend more time than they would like scrolling on their phone, and over the longer term, social media has been linked with various forms of psychological and emotional harm, including depression, anxiety, and body dysmorphia (Lin et al. 2016; Twenge 2017; Tiggemann et al. 2013, 2014, 2019; Tiggemann and Brown 2018). Recent work in theoretical cognitive science has proposed explanations of how these effects arise, suggesting that some platforms could plausibly undermine the ability of some users to form accurate and useful mental models of themselves and their environments (White et al. 2024). In his discussion of social media algorithms, Bruineberg (2023) is concerned with the “vulnerabilities of the embodied mind situated in an increasingly designed and digital environment” and provides a vivid and compelling account of what he calls “adversarial inference”, based on recent theoretical developments in neuroscience (Bruineberg 2023, pp. 1).

Adversarial inference models these algorithms as agents in their own right, observing the actions of their user, such as clicking, scrolling, or lingering on a specific bit of content, and then serving content predicted to generate further engagement. If the actions of the algorithm fail to drive more engagement, then it tweaks its predictions slightly and tries again, until it achieves the kind of observable user action that it wants. In cases where the goals of the user and the algorithm are misaligned - like when I want to limit my screen time in the evening but the system keeps generating shiny suggestions - the dynamic is one of “adversarial inference” (Bruineberg 2023, pp. 6). When coupled with the above observations about social media’s potential to warp our ability to model the world effectively, and the purported links with psychopathology, it’s clear that such engagements count as hostile affective scaffolding. By looking closely at Bruineberg’s account of adversarial inference, though, we can recognise key hallmarks of technology that threatens us with techno-wantonness, and push back against the characterisation of such systems as mind invasion.
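Bruineberg develops adversarial inference within an active inference framework; what follows is a far simpler, bandit-style sketch in Python of the loop just described, in which the system observes an engagement signal, updates its predictions, and keeps adjusting what it serves. All names, the dwell-time reward signal, and the parameters are illustrative assumptions rather than a description of any actual platform.

```python
import random
from collections import defaultdict

class EngagementMaximiser:
    """Toy content-selection loop: observe user behaviour (dwell time),
    update predicted engagement per content category, and serve whatever
    is currently predicted to maximise further engagement."""

    def __init__(self, categories, epsilon=0.1):
        self.categories = list(categories)
        self.epsilon = epsilon            # how often to probe the user
        self.value = defaultdict(float)   # predicted engagement per category
        self.count = defaultdict(int)

    def choose(self) -> str:
        # Mostly exploit the current model of the user; occasionally
        # explore, i.e. probe with something else to refine the model.
        if random.random() < self.epsilon:
            return random.choice(self.categories)
        return max(self.categories, key=lambda c: self.value[c])

    def observe(self, category: str, dwell_seconds: float) -> None:
        # A prediction that failed to drive engagement simply nudges the
        # running estimate, and the system tries again on the next item.
        self.count[category] += 1
        self.value[category] += (dwell_seconds - self.value[category]) / self.count[category]

# A simulated user who lingers longest on outrage-adjacent content,
# whatever their reflective preferences about screen time might be:
true_dwell = {"news": 2.0, "outrage": 9.0, "hobbies": 4.0}
algo = EngagementMaximiser(true_dwell)
for _ in range(500):
    shown = algo.choose()
    algo.observe(shown, random.gauss(true_dwell[shown], 1.0))
print(max(algo.value, key=lambda c: algo.value[c]))  # typically "outrage"
```

Note what the misalignment consists in: nothing in this loop represents the user’s second-order preference to disengage; only first-order behavioural evidence ever feeds the model.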

The first hallmark is that potentially hostile adaptive scaffolding is underpinned by devices that offer frictionless, or very low-friction, engagement (Bruineberg 2023, pp. 9). Smartphones allow for switching between apps with just a couple of swipes, and in-app refresh features often require just a downward swipe, eerily reminiscent of pulling the handle on an old-fashioned casino slot machine. Devices that threaten to engender techno-wantonness aim to reduce the friction associated with preference satisfaction, making it easier. This is in contrast with cases of genuine mind invasion: where adaptive technology removes friction, prisons and the like introduce it through embedded normativity - clocking in and clocking out, prohibition of practices and goods, close monitoring, and so on. Prisons, and indeed most corporate work environments, build in measures that aim to stop real preference satisfaction. For example, work environments might block certain websites to keep employees on task. This is a particularly poignant departure from cases Bruineberg mentions, wherein an agent trying to stay on task and avoid techno-wantonness might install their own site-blocking software to deliberately introduce friction. To drive the point home: mind invasion is, generally speaking, perpetuated and enforced through a strategic increase in friction, whereas techno-wantonness is engendered through a strategic reduction in friction, relative to an agent’s preferences.

The second hallmark feature of technology aiming to induce techno-wantonness is a drive to feed users endless novelty. Bruineberg, like the work on social media by White et al. (2024) mentioned earlier, builds his account upon an emerging, empirically-grounded picture of the brain as fundamentally driven to resolve uncertainty (see Hohwy 2013; Clark 2013, 2016). According to this account, for cognitive systems like us, novelty has an intrinsic value as it represents a signal for learning opportunities and epistemic gain (Schwartenbeck et al. 2013; Friston et al. 2016; Miller et al. 2020a, b). Moreover, the ongoing resolution of uncertainty is thought to underpin embodied feeling, with reductions in uncertainty relative to expectations giving rise to valenced affective states (Van de Cruys 2017; Kiverstein et al. 2020). In other words, part of the affective force that characterises adaptive technologies like social media algorithms is that they provide a limitless source of (carefully curated) uncertainty to resolve, which provides a stream of affective feedback. As Bruineberg puts it, social media platforms provide “practically infinite amounts of easily accessible novelty, highly variable reward, and content that taps into different motivations” (Bruineberg 2023, pp. 9). To be clear, the novelty and uncertainty here arise through platform features like a limitless content feed and notifications about messages and so on. It’s here that a well-established and sinister link between social media and casinos (in their techno-wanton ways) can be drawn: both are deliberately designed using exactly the same principles, grounded in the science of addiction. Anticipatory states relating to an uncertain cued reward are highly arousing, stimulating dopaminergic responses in the brain (Hegarty et al. 2014). It’s these exciting anticipatory states that hook gambling addicts, rather than any extrinsic reward per se (Van Holst et al. 2012; Linnet 2014). In short, techno-wanton-inducing technology weaponises uncertainty in strategic ways, while mind-invading scaffolding is relentless in its routine and predictability.
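The “highly variable reward” Bruineberg mentions can be made concrete with a variable-ratio schedule, the reinforcement pattern classically associated with slot machines and, more recently, with pull-to-refresh feeds. The sketch below is again illustrative, and the payout probability is an arbitrary assumption.

```python
import random

def refresh(p_reward: float = 0.125) -> bool:
    """Variable-ratio schedule: each pull-to-refresh (or lever pull) pays
    out with fixed probability, so *when* the next reward arrives remains
    permanently uncertain - the anticipatory state never fully resolves."""
    return random.random() < p_reward

misses = 0
for pull in range(40):
    if refresh():
        print(f"pull {pull}: novel rewarding content after {misses} misses")
        misses = 0
    else:
        misses += 1
```

On the account sketched above, it is this engineered, never-resolving uncertainty, rather than the content of any individual reward, that keeps the arousing anticipatory loop running.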

One significant advantage of this approach is that it can make more sense of cases wherein our engagement with scaffolding is driven by internal drives. Conceptually speaking, this is more difficult to do using the language of “invasion.” I think that, generally speaking, cases of techno-wantonness will often see the formation of strong internal triggers and motivations. I’m drawn to spend time on YouTube not because the environment or established social norms coerce me to, but because YouTube itself is so good at making apt recommendations, and is structured and designed such that, once I’m engaged, it’s harder to maintain and exercise a deep, reflexive economy of preferences. The claim here isn’t so crude as to suggest that bona fide forms of mind invasion never lead to the formation of internal triggers. Slaby is clear that being in a mind-invading space is to experience a deep shift in affectivity, and that shift in embodied orientation will mean a shift in what motivates us to act. But the notion of techno-wantonness seems better placed to capture cases where my engagement is more often than not driven by internal motivations.

In the analysis of this section, I hope to have clearly laid out some key features of techno-wanton-inducing scaffolding: novelty, uncertainty, frictionless use, and internal motivations together lead to a degradation in communication between first-order preferences (I want to check my social media) and second-order volitions (maybe I shouldn’t do this for two hours). I also hope to have shown persuasively that Spurrett’s identification of casino environments, and Figà-Talamanca’s identification of social media, as cases of mind invasion represents a conflation of mind invasion with something quite different. I propose understanding this class of cases as techno-wantonness. Techno-wantons haven’t had their minds invaded. They’ve simply been presented with a technology that is so effective at pandering to their preferences that their capacity to exercise a reflexive economy of desire and volition, something that defines us as persons, is enfeebled.

5 Section Four: Addressing Some Objections

5.1 Negative Affect is Surely Mind Invasion

One might push back against this account, arguing that it underplays the importance of affectivity. For example, in his account of mind invasion Figà-Talamanca (2024) focuses on how these systems drive engagement through the engineering of affective responses such as anger or outrage, which are prima facie negative affective experiences. If platforms are engineering affectivity to drive engagement using negatively valenced states, then how could they be characterised as inducing techno-wantonness? It seems implausible that users have an existing preference for negatively valenced affective states. In this sense, then, we might think that this affective engineering constitutes mind invasion. However, it’s too simplistic to conclude that, just because an affective state is negatively valenced, it isn’t in keeping with the genuine preferences of an agent. While we might think of outrage as negative, it’s also a signal for an opportunity to put things right, to engage, and to receive valuable social validation from our peers. Moreover, users may even deliberately seek out ostensibly negative emotional experiences in order to satisfy other needs. Research into horror media, for example, has shown how people actively seek out terrifying films and games in order to reduce general anxiety, to satisfy curiosity, or just for the sheer adrenaline rush (Scrivner et al. 2021, 2022; Miller et al. 2024).

5.2 Techno-Wantons are People Too!

Another response to the arguments made here is to object to the suggestion that techno-wantonness really reduces personhood. The worry here is that in painting those who succumb to techno-wantonness as having diminished personhood, we paint them as having (at least temporarily) diminished moral worth, or that we’re taking an unpalatable position of moral superiority. Although a full discussion of the relationship between personhood and moral standing is beyond the scope of this article, it should suffice for now to simply point out that in established definitions of personhood pertinent to moral philosophy, personhood is usually defined along multiple axes or with multiple criteria. For example, Warren (1973) outlines five criteria, of which “self-motivated activity” is only one (also see Dennett 1976). The thought is that while all five constitute a “full person”, an agent might suffer diminishment along one or more axes while retaining their status as a moral patient. It does not necessarily follow, therefore, that because the harm of techno-wantonness is a diminishment in personhood along the axis of the reflexive formation of second-order volitions, the agent is less worthy of moral consideration. Nor am I claiming that, in characterising the harm as diminished personhood, we should make techno-wantons targets of opprobrium or moral judgement. While some degree of temporary wantonness befalls us all at some point or another (and we should therefore be sparing in harsh judgement), it is nevertheless important to recognise that preserving a reflexive will is important, and a useful protection against the further harms of indulging our first-order preferences for too long. As adaptive technology improves, and the fight for attention becomes more fierce, self-control of this kind will become a virtue characterising good and healthy human-technology relationships, and should therefore be cultivated (Vallor 2016).

5.3 Scaffolded Second-Order Volitions

Another question concerns whether adaptive technologies can hijack second-order volitions just as easily as they can first-order preferences. If I consistently engage with a certain kind of social media content, might my higher-order preferences come to reflect the prescriptive character of the content? Consider, for example, how strains of toxic, misogynist content on platforms like YouTube can begin by being targeted at a specific vulnerable audience (teenage boys, say), who might not have been exposed to such ideas yet (Regehr 2022). Over time though, the message of such content becomes something that they consider to be a good will, i.e. ‘I should want to act as though women are objects and flash cars and watches are all that matter.’ As mentioned in an earlier footnote, it’s clear that our preferences (of the first and second-order variety) never form in a vacuum. We constantly stew in a pot of cultural influences and social affordances, and our second-order volitions are no safer from these than our first-order preferences are from institutions seeking to exploit or oppress them. On this point, all that’s needed is to accept that technology, especially systems like social media algorithms that deliver content so effectively, will of course shift the kinds of things we care about, and the goals we think it’s fitting to pursue. The kind of will I wish to have shifts constantly over time, now often influenced by technology as much as by other cultural affordances. But what’s important here, for this account of techno-wantonness, isn’t the content of those preferences or volitions, but their structure. While imbibing toxic preferences is obviously a harm, it isn’t the harm captured by the account provided here; a person could have the most toxic economy of desires possible, but as long as the reflexive dynamic between second-order volitions and preferences is intact, their personhood (at least on this axis) remains intact.

6 Conclusion

In this article I introduced the concept of the techno-wanton. Drawing from Harry Frankfurt’s work on personhood, the figure of the techno-wanton is a wanton-like agent who, temporarily and in relation to a material scaffold, exhibits characteristics of Frankfurt’s wanton. I’ve argued that this concept should be used to describe situations in which the structure of a person’s will is degraded by their engagement with adaptive hostile scaffolding, such as some social media platforms, gaming environments, and, conceivably, many future forms of highly-adaptive technology. Such technology is already beginning to use sophisticated forms of biofeedback to adapt more quickly and competently. It’s conceivable that systems using this technology will, in some sense, know what we want just as well as we do. I argued that there is a clear class of cases in which techno-wantonness is to be preferred over Slaby’s notion of “mind invasion”, and that recent work has misapplied the mind invasion concept. Part of the motivation for this is that mind invasion fails to capture the subtlety and insidiousness of techno-wantonness. The distinction between technology that is hostile to our existing orientation, and technology that weaponises that orientation against us to take our attention, is a real and important one. It should be noted, though, that the line between mind invasion and techno-wantonness might not always be perfectly sharp. While the erosion of personhood is absent in the genuine cases of mind invasion described here, within which the structure of a person’s will can, and presumably often will, remain intact, it’s conceivable that some mind-invading scaffolds could successfully employ some of the techniques typical of technology designed to induce techno-wantonness.[Footnote 10] I propose that by adopting this new concept, we sharpen our weapons for the critical analysis of emerging technology that is likely to increasingly threaten our ability to maintain robust agential control.