Across the technology press and wider discourses of human–technology relations, machine learning innovations are presented as making intelligent devices more flexible and intuitive—with automated assistants such as Alexa, Siri, and Cortana offering prominent examples. Amazon’s Alexa, for instance, can now whisper if she picks up that you are trying to be quiet, recommend a recipe for chicken soup if she senses you are ‘coming down with something’ (Fussell 2018), or ask about ‘a light you left on if she has a hunch that you did it unintentionally’ (Biggs 2019). Employing an algorithmic system called ‘Hunches’, the Amazon Echo correlates information from a user’s Alexa-enabled devices with ‘publicly available information such as timetables, clocks and weather patterns to develop an understanding of human habits’ and ‘intuit a user’s needs’ (Atkinson and Barker 2021, p. 58). Alexa can acquire expertise through learning over 3000 voice-activated Skills (app-like software services activated from a store), from playing songs to telling ‘Dad’ jokes. Yet, she is also continually training herself—recursively honing intuitive modes of anticipation, recognition, and responsivity through machine learning programmes in speech recognition and natural language processing which draw on neural network algorithms trained on millions of examples of repeated speech requests. The more that Alexa can passively acquire intimate, somatic, and behavioural data, the more pre-emptive she can be, anticipating requests before they are made and nudging emergent thoughts, behaviour, and relations into being.

Whether understood as a gut feeling based on experience, fast-thinking that bypasses rational deliberation, or the kind of data-driven hunch that Alexa manifests, intuition has long been vital to embodied and distributed modes of sensing, knowing, navigating, and transforming the world. For the French philosopher Henri Bergson, intuition is a way of knowing that entangles cognitive and sensory data to connect us viscerally with change as it unfolds. It is immersive engagement with material life that allows us to inhabit, if only fleetingly, the ‘continuous flux’ beneath the ‘sharply cut crystals’ of analytical thought (Bergson [1903]1912, p. 3). At the intersection of speculative philosophies and contemporary affect theories, intuition persists as a powerful orienting lens within conversations concerning not only how we navigate the sensory sinews of everyday life, but also how we might encounter pre-emergent social, cultural, political, and economic forces and relations (Williams 1977; Berlant 2011)—conversations which continue to evolve amid advancements in artificial intelligence via which our seemingly most internal instincts and insights are infiltrated by ‘algorithmic judgements, assumptions, thresholds and probabilities’ (Amoore 2020, p. 64). While intuition has always been more-than-human, as it develops via immanent interactions among minds, bodies, and environments (Pedwell 2022), the emergence of ‘artificial intuition’ enabled by algorithmic architectures trained on vast quantities of data illuminates how sensing, thinking, and speculating in computational cultures now extend across and are entangled with machines animated by inhuman agencies.

This article explores how, and with what critical implications, intuition became algorithmic. My focus is on the ways in which intuition, broadly conceived, has been understood as recursively trained through lived experience—and how interpreting intuition as ‘a trained thing’, as the late affect scholar and cultural theorist Lauren Berlant (2011) puts it, helps us grapple with what is at stake in the entanglements of sensorial, cognitive, computational, and corporate processes and (infra)structures that characterise contemporary algorithmic life. Contributing to my wider project of assembling a post-war affective genealogy of human–machine relations in North America and Britain oriented around shifting conceptualisations of intuition (Pedwell 2019, 2021b, 2022), I situate the recent rise of artificial intuition within broader techno-social encounters and atmospheres—spanning from the decades surrounding the birth of the first digital computers after World War II, through the roll-out of personal computing, to the post-millennial consolidation of advanced machine learning technologies. In bringing affect theory and speculative philosophies to bear on computational histories and cultures, this approach enables me to tease out the continuing sensorial, socio-political, and ethical implications of post-war efforts to make intuition a quantifiable form of anticipatory knowledge. It also allows me to address what is both distinctive and troubling about the speculative training of human–algorithm capacities in the age of machine learning—while glimpsing affective potentialities for transformation that flicker persistently within these unfinished and contested genealogies.

As I will discuss, understanding intuition as a sensory-cognitive modality recursively reproduced through lived experience animates a vision of human cognition and sensibility as trained and trainable, but also ever permeable to environmental influence, whether generative or malign (or, more often, profoundly mixed). Recursion in machine learning systems, however, operates differently—far from replicating the psychic or neurological workings of human thought or memory, it involves the automated prehension of infinite data across durations incommensurable with human time, space, or sense perception (Parisi 2013; Clough et al. 2015). The somatic and behavioural data collected through algorithmic architectures are employed primarily for the purpose of the personality modelling necessary for personalisation, via which, as the AI researcher Luke Stark puts it, ‘individuals become part of “psychometric bubbles”, groups of “dividuals” imagined as atomized and individually manipulable’ (2018, p. 220; see also Clough 2018). Through these recursive systems, we are, it has been argued, perpetually trained and re-trained, disassembled and re-assembled as part of a giant corporate psychological experiment which generates endless harvestable data (Andrejevic 2013; Zuboff 2019). ‘Experiment’ here operates as an immanent virtual laboratory for capital and ‘speculation’ generates value through leveraging post-probabilistic uncertainties and incomputable data. Unpacking how intuition has been recursively trained within intelligent systems that span first and second wave AI, I will suggest, illuminates how the affective, ideological, and technological have become intertwined at the current conjuncture—and the ensuing ramifications for present and future modes of collaborative imagination, speculation, and transformation.

The first section of the article traces how, between the 1930s and the 1980s, intuitive expertise consolidates as a honed capacity for pattern recognition enabling leaders to make effective decisions amid the ascent of personal computing technologies and nascent forms of neoliberalism. A significant flattening of intuition’s complexity or ‘chaos’ (Berlant 2011) occurs, I suggest, through its post-war travels across management, psychology, computer science, and neuroscience, via which it becomes a measurable and indexable mode of information processing—eliding the more expansive and ambivalent ways in which visceral response is trained in everyday life. As I examine, though, mathematical genealogies offer a more ambiguous account of intuition’s amenability to formalisation and codification, while also prefiguring intuition’s more-than-human computational futures. The transition from information processing AI to affective computing and machine learning during the 1990s and 2000s paves the way for the emergence of artificial intuition as a generative, experimental, and speculative mode of algorithmic pattern recognition that entangles human and machinic propensities.

If twentieth-century computer pioneers conjured a future vision of thinking machines that side-stepped the thornier implications of quantifying intuition, the second section of the article asks what computational myths are at play in current accounts of machine learning-enabled sensing, thinking, and speculating—and what complexities and contradictions may be disavowed, repressed, or filtered out in the process. Rather than achieving precision through encountering what Bergson ([1903]1912) called ‘true differences in kind’, or cultivating intuition as a processual mode of affective navigation attuned to ‘unforeclosed experience’ as Berlant describes (2011, p. 5), data-driven hunches frequently reproduce a recursive loop of dominant cultural associations (Hallinan and Striphas 2016) or make probabilistic speculations on the basis of iterative biases and prejudices projected into the future (Chun 2021). Artificial intuition, from this perspective, may function primarily to extend ontopolitical modes of control as corporations and governing institutions seek to translate all human affect and action into data points for the generation of profit or political gain.

Reflecting on the centrality of indeterminacy and ambivalence to algorithmic recursion—and to digitally mediated social life more generally—I ultimately argue for an understanding of more-than-human intuition that recognises that visceral response is trained in multiple ways with diverse, and often contradictory, effects. Approaching intuition as a ‘trained thing’ also invites us to attend to the wider (infra)structures and ecologies that enable, shape, and/or limit collaborative modes of sensing, thinking, and speculating; and through which new recursive politics and possibilities may emerge.

Indexing intuitive expertise

Across speculative philosophies and interdisciplinary affect studies, intuition has been theorised as a sensory-cognitive mode of inhabiting social life that exceeds representational thought. In Cruel Optimism, for instance, Berlant describes intuition as a ‘process of dynamic sensual data gathering’ through which ‘we make reliable sense of life’—especially when habits and modes of navigating the world are disrupted amid the crumbling ‘social democratic promise of the post-Second World War period in the US and Europe’ (2011, p. 53, 3). Within the felt dynamics of everyday encounters that are not so much organised as disorganised by capitalism, Berlant suggests that intuition ‘works as a kind of archiving mechanism for the affects’, channelling sensorial intensities, churnings, and blockages into ‘habituated and spontaneous behavior that appears to manage that ongoing present’ (2011, p. 9, 19). Cruel Optimism’s cases range from the intuitive ‘rehabituation’ of affective life demanded amid the visceral destruction of the AIDS epidemic (2011, p. 53), to the various ways in which ‘a kind of love’ or ‘a political project’ promise to manifest ‘an improved way of being’ (2). Although the immanent education of intuition amid the crisis ordinariness of the present often takes shape in relation to ‘the predictable comforts of good-life genres’, it may also entail a ‘risk of attachment’ which ‘manifests an intelligence beyond rational calculation’ (2). If classical philosophical accounts of intuition, associated with Plato and Descartes, link it to pre-existing knowledge which is externally valid (Chudnoff 2013, p. 2), Berlant thus insists that intuitive intelligence is not simply ‘autonomic activity’; rather, it is constituted recursively through lived experience and thus ‘visceral response is a trained thing’ (2011, p. 52).

Although writing in a different historical context and with disparate political sensibilities, Henri Bergson conveys a related understanding in An Introduction to Metaphysics when he describes intuition as a capacity that ‘every one of us has had occasion to exercise’ and yet one that is cultivated through empirical attention over time ([1903]1912, p. 19). Bergson employs the example of the intellectual labour of literary composition:

[W]hen the subject has been studied at length, the materials collected, and the notes all made, something more is needed in order to set about the work of composition itself, and that is often the very painful effort to place ourselves directly at the heart of the subject, and to seek as deeply as possible an impulse, after which we need only let ourselves go ([1903]1912, p. 21).

Intuition, here, is the ephemeral ‘impulse’ that exceeds analytical understanding, yet this ‘something more’ does not arise out of thin air; rather, it is the familiarity and discernment honed via a longer duration of intellectual engagement that makes intuition possible. As Bergson puts it, ‘we do not obtain an intuition from reality—that is, an intellectual sympathy with the most intimate part of it—unless we have won its confidence by a long fellowship with its superficial manifestations’ (21). So, while intuition as impulse operates otherwise to analytical thought, it is not divorced from this cognitive modality but works in tandem with it; intuition can be trained through systematic forms of attention within the wider flows of everyday experience.

In this particular way, Berlant’s and Bergson’s respective visions intersect with cognitive psychologies and philosophies which understand intuition as a trained mode of action-perception. Think, for instance, of how, as the psychologist David G. Myers puts it, ‘the violinist’s intuition is hard-earned. It is natural, graceful automatic processing wrought from thousands of hours of practice’ (2002, p. 29). Or consider the classic study by psychologists and computing pioneers Herbert Simon and William Chase (1973) which showed that expert chess players could intuitively reproduce the chess board layout after a mere five-second glance. What the chess grandmaster perceives, Simon later explains, is ‘not an arrangement of 25 pieces but an arrangement of a half dozen familiar patterns’ associated with memories concerning the danger each pattern holds and ‘what offensive or defensive moves it suggests’ (Simon 1987, p. 60). This understanding of intuition as a honed capacity for pattern recognition has been central to scholarship on expertise within psychology, philosophy, cognitive science, and management studies since the 1960s, which examines how ‘human experts, after years of experience, are able to respond intuitively to situations in a way that defies logic’ (Dreyfus and Dreyfus 1988, p. xiv). It is intuition, these literatures suggest, which constitutes the ‘final fruit of skill acquisition’ and drives much fast-paced decision making by leaders in business, politics, education, and industry (1988, p. xx).

A founding text for intuition in management studies is the American business executive and organisational studies scholar Chester I. Barnard’s 1938 book The Functions of the Executive, which explores the differences between ‘logical’ and ‘non-logical’ bases for decision making. In a context in which executives ‘do not often enjoy the luxury of making their decisions on the basis of orderly rational analysis’, Barnard argues that they ‘depend largely on intuitive judgement’ (cited in Simon 1987, p. 57). With growing acknowledgement in the second half of the twentieth century that North America and Europe were entering a new knowledge and information economy, the case for a leader who relied on ‘the visionary and anticipatory qualities of intuition’ grew increasingly compelling (Lussier 2016, p. 716)—with the administration scholar John T. Kimball’s 1966 article ‘Age of the Intuitive Manager’ offering a salient example. As Kira Lussier discusses, a key argument within management theories during this period was that ‘overreliance on careful planning, established procedures, and authoritarian lines of hierarchy no longer sufficed in [a] competitive, complex, and ever-changing business climate’ (2016, p. 709). Intuition, in other words, was required for navigating fast-paced organisational and environmental transformation. As intuition became recognised as a core management trait in the 1970s and 1980s, an industry of consulting psychologists arose who claimed that intuitive problem solving could be taught through seminars and enhanced via workplace conditions that ‘tolerated complexity, messiness and even chaos’ (714). That intuitive management practices were figured as important in organisations experiencing ‘budget shortages, downsizing, or outsourcing’ to allow ‘companies to extract more productivity out of existing employees’ (714) highlights the emergent links among intuition, neoliberalism, productivity, and profit-generation animating such discourses and practices.

Meanwhile, mathematicians had long contemplated the role of intuition in computational logic and reasoning. Such debates galvanised around a series of logic problems laid out by the German mathematician David Hilbert in 1900 which, alongside the publication of Principia Mathematica (1910–1913) by the British mathematician-philosophers Alfred North Whitehead and Bertrand Russell, explored the possibility of formalising all mathematical logic to eliminate theoretical uncertainties. In response, the founder of the philosophy of ‘intuitionism’, Dutch mathematician L.E.J. Brouwer ([1927]1975), defended intuition as a cognitive activity vital to mathematical knowledge-building which runs counter to the automated theorem proving entailed by formalism. From the 1960s, mathematicians began to approach intuition as fundamentally linked to context and experience and, in that sense, trained. Writing in Science in 1967, for instance, the American mathematician R. L. Wilder describes mathematical intuition as ‘an accumulation of attitudes derived from one’s mathematical experience’ which is formed ‘by the cultural environment’ and is ‘of immediate importance to creative work’ (1967, pp. 605–606). On one hand, these accounts of mathematical intuition seem to preserve it as an immanent human propensity resistant to formalisation, mechanisation, or codification. On the other hand, the very notion of intuition as trainable resonated with studies of intuitive expertise in management and psychology and bolstered interest within computer science concerning how intuitive knowledge and decision making could be engineered in machines.

Since the coining of the term ‘Artificial Intelligence’ at a 1956 summer workshop at Dartmouth College led by the mathematician John McCarthy (which involved Herbert Simon, Allen Newell, Marvin Minsky, Claude Shannon, and others), AI research has investigated how ‘machines could simulate aspects of human intelligence: the ability to sense, reason, make decisions and predict the future’ (Fan 2019, p. 18). Following the advent of digital computers in Britain and North America after World War II, pioneering work in computer science wagered that the computational manipulation of symbols offered the key to engineering ‘thinking machines’ (Turing 1950)—an insight which formed the foundation of first wave AI’s logic-based approach. A year prior to the Dartmouth workshop, in 1955, Newell and Simon had created the ‘Logic Theorist’, a programme that eventually proved 38 of the first 52 theorems in Russell and Whitehead’s Principia Mathematica. In his influential 1959 intervention, ‘Programs with Common Sense’, McCarthy unveiled speculative plans for a logic-based programme called the ‘Advice Taker’, to be co-created with Minsky, which would improve its behaviour solely on the basis of statements made to it about its ‘symbolic environment and what is wanted from it’ (1959, p. 4). If computers could be trained to abstract, generalise, and learn from their own knowledge via higher-order logic, computer scientists speculated, they would cultivate an increasingly intuitive intelligence.

The growing adoption of personal computers in homes, schools, and workplaces on both sides of the Atlantic (and far beyond) from the early 1980s set the stage for enhanced theories of intuitive expertise amid growing public interest and anxiety concerning people’s changing relationships with ‘new’ technologies. Extending earlier cybernetic thinking, Simon held that the modern digital computer offered illuminating models of human thought which highlighted how, for human experts and AI systems alike, rapid and intuitive decision making depends on the honed recognition of ‘chunks or patterns stored in long term memory’ (1987, p. 61). The central research and development task ahead, Simon contends, is to extract and catalogue ‘the knowledge and cues used by experts in different kinds of managerial tasks’ so that this information can be automated by computers (1987, p. 39). Simon articulated this imperative amid the revolution of the personal computing industry associated with the Apple corporation’s launch of its Macintosh computer in 1984, following the engineering of the first ‘true personal computer’ in 1973 by Xerox Corporation’s Palo Alto Research Centre (Turkle 1995). This period also witnessed the rise of expert systems, logic-based AI programmes that, by the 1980s, were being ‘used experimentally to help physicians diagnose diseases, as well as commercially to help geologists locate mineral deposits and to aid chemists in identifying new compounds’ (Rheingold 1985, p. 23). In these conditions, Simon anticipates a ‘highly interactive’ future in which ‘knowledge and intelligence [will be] shared between humans and components of the system’ (1987, p. 61).

Others, however, were more wary of artificial ‘decision aids’ and ‘expert consultants’ (Simon 1987, p. 61), as well as wider claims concerning the possibility of automated intuition. Contesting Newell and Simon’s announcement in 1958 that ‘intuition, insight, and learning are no longer the exclusive possessions of human beings and any large high-speed computer can be programmed to exhibit them’ (paraphrased in Dreyfus and Dreyfus [1985]1988, p. 3), the American philosopher Hubert Dreyfus insisted that human intelligence was fundamentally different from computer intelligence and that without embodied knowledge computers were incapable of intellectual tasks that required intuition and experience. First articulating this position in a combative 1965 review of Newell and Simon’s AI research for the RAND Corporation (the national research thinktank offering analysis to the US military), Dreyfus argued in his 1985 book, Mind Over Machine: The Power of Human Intuition and Expertise in the Era of the Computer, that computers ‘can apply rules and make logical inferences at great speed and with unerring accuracy’, but what they lack is the ‘intuitive intelligence that enables us to understand, to speak, to cope skilfully with our everyday environment’ (Dreyfus and Dreyfus [1985]1988, p. xx; see also Dreyfus 1972). Dreyfus thus questions the very possibility of automated intuition if intuition is, by definition, an embodied, visceral, and situated capacity—laying conceptual groundwork for future philosophers, sociologists, STS scholars, and digital media researchers to address the (im)possibilities of designing AI with genuine contextual awareness (Pedwell 2022).

While Simon and Dreyfus articulate opposing perspectives on, and affective orientations towards, the future of human–machine relations, I am interested in how they both highlight ‘the arational’ as central to human behaviour and figure intuition as a recursively honed mode of recognition; one, that is, which can be cultivated and trained. Acquiring intuitions, Simon suggests, is a process of ‘shaping habits of attention’ which combines deliberate action with less-than-conscious learning and responsivity (1987, p. 61). In this vein, Dreyfus describes how, at the first level of skill acquisition, beginners must focus carefully on what they are doing and the theories behind it, whereas the highest levels of expertise involve the ability to ‘intuitively respond to patterns without decomposing them into component features’ (Dreyfus and Dreyfus [1985]1988, p. 28). Disrupting traditional associations of advanced intellectual performance with detached rational thought, Dreyfus elevates close attention to concrete objects and processes as central to human expertise—in ways that speak to both Bergsonian intuition and contemporary affect theories. Elements of this line of thinking are reflected in more recent writings informed by behavioural economics such as ‘nudge’ theory and neuromarketing, which draw on Simon’s influential (1945) book Administrative Behaviour to figure behaviour change as best approached through less direct, and sometimes less-than-conscious, strategies that work affectively through modes other than reasoning or proscription (Pedwell 2017, 2021a).

If, however, mainstream cognitive psychologists, philosophers, and behavioural economists assume a relatively bounded individual subject and pay scant attention to the politics of intuition, Berlant, in line with affect theory more broadly, is interested in collective practices of anticipation in which ‘affect meets history, in all its chaos, normative ideology, and embodied practices of discipline and invention’ (2011, p. 52). People’s ‘styles of response to crisis’ are, Berlant suggests, ‘powerfully related to the expectations of the world they had to reconfigure’ in the face of tattered post-war promises of social reciprocity (2011, p. 20)—though never in predictable linear or grid-like ways. Indeed, ‘normative affect management styles’ do not, Berlant emphasises, ‘saturate the whole world of anyone’s being’ (20). Rather, they are always unfolding and shot through with contradiction and ambivalence. Grappling with intuition’s lived dynamics, then, requires, as the late affect scholar Eve Sedgwick puts it in another context, ‘the simple, foundational, authentically very difficult understanding that good and bad tend to be inseparable at every level’ (2011, p. 136).

What is interesting and troubling (if not surprising) from this perspective, is how Simon, Dreyfus, and other theorists of expertise assume that ‘good’ intuition is separable from ‘bad’ intuition—that the expert’s ‘arational’ judgement borne of experience can be discretely parsed from ‘irrational’ psychic or affective responses, or indeed from habitual forms of social privilege that shape embodied ways of sensing, perceiving, and inhabiting the world. Dreyfus insists, in this vein, that intuition must be distinguished from ‘irrational conformity, the re-enactment of childhood trauma, and all other unconscious and noninferential means by which human beings come to decisions’ (Dreyfus and Dreyfus [1985]1988, p. 29). Similarly, for Simon, ‘the intuition of the emotion-driven manager is very different from the intuition of the expert’; an affective ‘response to primitive urges’ should not be confused with expert decision making (1987, p. 62). At pains to bestow intuition with analytical purchase and clarity, such interventions frame intuitive expertise not simply as arational or non-analytic thought per se, but rather ‘the product of deep situational involvement and holistic discrimination’ (Dreyfus and Dreyfus [1985]1988, p. 29).

Yet, the assumption that expertise can be neatly cordoned off from the many other ways in which ‘visceral response’ is immanently trained (Berlant 2011) betrays a wilfully limited account of how minds, bodies, and environments transact to produce ‘habits of attention’ (Simon 1987)—one which, of course, forecloses the possibility that expertise might be intertwined with, or enabled by, normativity, exclusionary values, or prejudice. It is clear that attempts to distinguish cognitively led expert forms of intuitive judgement from affectively saturated responsivity rooted in ‘irrationality’ or ‘primitiveness’ reproduce pernicious gendered, racialised, and classed frameworks for assessing legitimate knowledge and authority. They also, however, fail to address the unfolding interplay of cognitive and affective processes that I suggest guides everyday modes of knowing, navigation, and speculation, as well as immanent human–machine relations in a computational age (see Clough 2018; Blackman 2019). The result, I argue, is a whitened and masculinised conceptualisation of intuition that is effectively stripped of both its sensory elements and its immanent relationship to logics and technologies of regulation, power, and control.

Or, perhaps what is at stake here is not the elision of the sensory per se, but rather the exaltation of particular forms of affective management. Without becoming overwhelmed by anxiety, fear, or exhilaration, for instance, the chess grandmaster must rapidly access and act on memories of ‘a half dozen familiar patterns’ (Simon 1987) associated with sensory signals linked to danger, apprehension, and/or the thrill of imminent victory. As such, part of what defines expertise here, and differentiates ‘the expert’ from ‘the novice’ or ‘the unskilled’, is the rigorously honed (and thus less-than-conscious) capacity to regulate and channel affect for the production of advantage or value. When mobilised uncritically, this understanding of expertise recalls a long genealogy of biopolitical discourse in which, as Kyla Schuller writes, civilised bodies were figured as ‘receptive to their milieu and able to discipline their sensory susceptibility’, whereas uncivilised bodies were ‘impulsive and insensate, incapable of evolutionary change’ (2018, p. 14)—imperialist logics which Bergson, alongside other nineteenth- and twentieth-century philosophers, was not exempt from reproducing (Bennett 2015; Pedwell 2021a).

It becomes evident, from this perspective, how attempts to extract intuition from ideology and the immanent workings of power often have the opposite effect of foregrounding such imbrications. Indeed, if the recursive training of intuition is, in part, about how, in Berlant’s words, ‘affect meets history, in all its chaos’, then affect and politics are always already entwined and intuition is not a neatly bounded, separable, or sanitised intellectual capacity; rather, as an unfolding set of relations, it is messy, ambivalent, and never fully amenable to human control. While varying literatures on intuitive expertise envision cognitive-sensory modes of pattern recognition as affording experts increasing mastery over their conduct and environment, Berlant suggests that, in the context of fraying ‘post-war fantasies of the good life’, what is at stake is not the possibility of amassing ‘expertise enough to master the situation’ but rather only ‘a commitment to cultivating better intuitive skills for moving around this extended, extensive time and space’ (2011, p. 15, 59).

The conceptual flattening of intuition’s complexity or ‘chaos’ can be traced, in part, to its transatlantic post-war travels across psychology, management, computer science, and neuroscience, via which it becomes a professionally palatable mode of cognitive information processing. As Lussier notes, early advocates of intuitive management ‘faced scepticism from business leaders, who tended to see intuition as mysterious, unexplainable, or overly emotional’ (2016, p. 709). As one management writer recalls, you would not ‘be taken seriously talking about “hunches and gut feelings—not until you could index them”’ (709). A range of techniques, ‘from personality tests and brain scans to creativity seminars and assessment centres’ were thus developed in the 1970s and 1980s to both cultivate intuition and make it measurable (709). Arising alongside the corporate roll-out of computing technologies, this redescription of intuition via the language of information processing resonated with, and was legitimated by, the rise of cognitive science—which, as Elizabeth A. Wilson argues, ‘established cognition as a world unto itself cut off from other ecologies’ (2010, p. 64), including affective, psychic, and social ones. Relatedly, endeavours like the Cyc project in 1980s computer science sought to make expert systems more intuitive by codifying human common sense, as if the possession of logically organised, machine-readable information equated to visceral intuitive navigation and sense-making.

If aligning intuition with information processing resonates, on one hand, with the synergies between computers and brains established by cybernetics and subsequent work in AI and computer science in the wake of ‘the Turing Test’ (Turing 1950), intuitive management discourses, on the other hand, call for the cultivation ‘of visionary leadership that could not be outsourced to computers or clerical staff’ (Lussier 2016, p. 718). As suggested earlier, intuition’s ambiguous relationship to ‘the human’ can be traced through twentieth-century mathematics and computer science. In ‘Systems of Logic Based on Ordinals’, for example, Turing argues that mathematical reasoning depends on an iterative relationship between intuition and ingenuity. While intuition is vital to mathematical discovery and involves ‘making spontaneous judgments which are not the result of conscious trains of reasoning’, ingenuity consists of ‘suitable arrangements of propositions, and perhaps geometrical figures or drawings’ (1939, pp. 214–215). In the face of the mathematical formalism pursued by Hilbert and Whitehead and Russell, Turing’s perspective could be read as defending mathematical intuition as the preserve of human embodied cognition. I would suggest, however, following the political geographer Louise Amoore, that Turing’s account is more generatively interpreted as signalling how, even in the mid-twentieth century, ‘the human and machinic elements of mathematical learning … are not so readily disaggregated’ (2020, p. 57). In this way, histories of mathematics reflect intuition’s more-than-human qualities in ways that anticipate how the ‘extended intuition of machine learning’ entangles human and algorithmic capacities to ‘feel its way towards solutions and actions’ (2020, p. 67).

To say that intuition is a ‘trained thing’, then, conveys a variety of meanings and implications. If twentieth-century literatures on expertise figure intuition as ‘a rapid, fluid, involved’ mode of intellectual perception honed within conducive organisational environments (Dreyfus and Dreyfus [1985]1988, p. 28), intuition is framed across speculative philosophies and affect theories as the sensory-cognitive product of unfolding mind–body–environmental interactions, wherein ‘environment’ is conceptualised in the broadest sense. Experience, from the latter perspective, could entail the mix of purposeful action and passive learning mobilised by the violin player, chess master, mathematician, or business executive, but also much wider worldly encounters through which everyday modes of anticipation, knowing, and responsivity are continually educated and (re)formed—from early family life and educational or vocational settings to the visceral dynamics of inhabiting a gendered body or the quotidian violence of white supremacy. Our embodied hunches, then, are unfolding stories about intimate psycho-somatic histories, and yet, given how intuition imbricates the biological, physiological, social, cultural, and political all the way down, they are also never really ‘our own’. We might understand gut feelings, from this perspective, as an (im)material junction point for affect and ideology, which simultaneously disrupts any assumption that such forces are ontologically separable. Within the ambivalent affective atmospheres surrounding twentieth-century advancements in digital technologies, intuition mediates increasingly uncertain and changing human–machine relations—itself being actively (re)made to suit shifting socio-technical and politico-economic interests and requirements.

This is not, however, to suggest that intuition is only about the reproduction of social normativity, or that it is simply a transcription of a psychological and socio-political state. Indeed, in both Bergson’s writing and Gilles Deleuze’s later account of ‘Bergsonism’, intuition brings together ‘experience and experiment’ to produce speculative knowledge oriented towards possibility and discovery (Seigworth 2006). Similarly, for Berlant, intuition combines ‘discipline and invention’ (2011, p. 52) and is, as such, aligned with ‘rhythms and routines that “never quite settle into shape” and, so, can be recalibrated towards lighter, looser constellations of feeling-forward together’ (Berlant 2011, p. 93 cited in Seigworth 2017). As the next section explores, the post-millennial rise of artificial intuition opens up new (and old) questions concerning the training of intuition as algorithmic systems increasingly re-distribute sensation, perception, and cognition across humans and machines—and mobilise speculation and experimentation as extractive technologies for the generation of value and profit.

The rise of artificial intuition

While the ‘information-processing’ AI pioneered by Simon and Newell had been the dominant paradigm since the 1960s, it demonstrated a persistent lack of flexibility and intuitive common sense knowledge (Suchman 2007; Cantwell Smith 2019). By the late 1980s, however, a new approach was gathering pace which wagered ‘that robust intelligence would emerge, not from cognitive processing of symbols but from the agent’s direct, embodied interactions with the world’ (Wilson 2010, p. 70). Extending cybernetic thinking of the 1940s and 1950s, including the neurophysiologist Warren McCulloch’s and the mathematician Walter Pitts’ founding research on artificial neural networks and Frank Rosenblatt’s perceptron, which pioneered the idea of using neuroscience to guide learning machines, connectionism treated the computer as an ‘evolving biological organism’ and mobilised ‘learning algorithms’ that were much better at dealing with change than traditional AI (Turkle 1995, p. 131). Connectionists revived work on neural networks, which had stalled during the 1970s, to explore how they could learn and handle information in a more flexible and intuitive way than symbolic processing AI.

Meanwhile, at MIT Media Lab from the mid-1990s, what the roboticist Rodney Brooks called ‘nouvelle AI’ sought to build artificial agents which demonstrated a responsive distributed intelligence with ‘the capacity for growth’ (Wilson 2010, p. 4). Leading up to the new millennium, other MIT researchers set out to design virtual agents with emergent intelligence, such as Pattie Maes’s email-sorting agents which learned ‘through receiving feedback on their performance’ (Turkle 1995, p. 99), and envisioned early forms of domestic and wearable AI, including Alex Pentland’s plans for ‘smart rooms’ and ‘smart clothes’ that would respond intuitively to users’ thoughts and behaviours via their ‘perceptual intelligence, and capacity to learn independently’ (Atkinson and Barker 2021, pp. 35–36). This new AI animated an intuitive intelligence cultivated directly via ongoing environmental interactions populated by sensory, perceptual, and behavioural data.

The term ‘artificial intuition’ would gain increasing salience in the decades to follow amid major advances in machine learning enabled by increased hardware capabilities and an exponential growth in available data. In their recent survey of tech journalism, Jacob Johanssen and Xin Wang chart the rise of artificial intuition as an industry buzzword referring to the ability of ‘AI systems to make intuitive choices and respond intuitively to problems’ through ‘subconscious pattern recognition’ (2021, pp. 175–176). Across technology circles and wider public culture, machine learning innovations are presented as allowing intelligent technologies—from self-driving cars to internet search engines to automated home assistants like Alexa—to operate in a more fluid and intuitive way. Similar to the computerised ‘expert consultants’ Simon envisioned in the 1980s, smart devices like Alexa are marketed as enabling users to multi-task while boosting productivity; yet unlike Simon’s digital agents or Pentland’s smart rooms, Alexa has ubiquitous access to vast quantities of networked data. Through the ongoing extraction of personal and sensory data, machine learning systems can access latent features by tracking correlations in extraordinary statistical detail—thus enabling states, corporations, and other powerful actors to anticipate and shape human choices, feelings, and actions in unprecedented ways.

From the 1980s, high-level collaboration between mathematics, economics, and neuroscience led to the integration of probability and decision theory into AI—including the development of Bayesian networks (Pearl 1985). Developing insights from the eighteenth-century mathematician Thomas Bayes, who offered ‘a novel way to reason about the probability of events’, Bayesian networks proved a powerful tool in machine learning technologies—often combining with neural network algorithms to allow ‘AI to learn adequately despite imperfect data’ (Fan 2019, p. 46). In 1986, the psychologist David Rumelhart, with computer scientists Geoffrey Hinton and Ronald Williams, advanced a method for training neural networks called ‘backpropagation’ that ‘works by attributing reduced significance to an event as it moves further back in the chain of events’ (Hayles 2022, p. 637), enabling the development of machine learning algorithms for natural language processing, visual image classification and analysis, and machine translation. For Amoore, the re-making of eighteenth-century rules of chance via Bayesian inference models, alongside the development, from the 1990s, of advanced data mining techniques, signalled the infiltration of ‘the intuitive and the speculative within the calculation of probability’ (2013, p. 44). This partial shift from strict probability to speculative possibility is, in conjunction with the design of advanced evolutionary algorithms, crucial to the post-millennial rise of artificial intuition. It also signals the moment in mathematical logic when intuition stretches more dramatically beyond its humancentric framings to become a trainable, algorithmically mediated capacity.
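
To make the underlying mechanics concrete, the following is a minimal sketch of the Bayesian updating that Bayesian networks generalise: a prior belief is revised as imperfect evidence arrives. The scenario and all numerical values are hypothetical, chosen purely for illustration.

```python
# A minimal sketch of Bayesian updating, the inferential step that Bayesian
# networks chain together across many variables. All numbers are assumptions.

def bayes_update(prior, likelihood, false_alarm_rate):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    evidence = likelihood * prior + false_alarm_rate * (1 - prior)
    return (likelihood * prior) / evidence

# Prior belief that a user intends to be quiet (assumed 10%).
prior = 0.10
# Probability of a whispered request given that intention (assumed).
likelihood = 0.80
# Probability of a whispered request without that intention (assumed).
false_alarm = 0.05

posterior = bayes_update(prior, likelihood, false_alarm)
print(f"Updated belief after one whispered request: {posterior:.2f}")  # ~0.64
```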

A key feature linking understandings of artificial intuition across AI, computer science, and the technology press is that these machine learning programmes are abductive rather than deductive. That is, unlike ‘deductive reasoning by hypothesis testing’, they ‘deploy abductive reasoning so that what one will ask of the data is a product of patterns and clusters derived from the data’ (Amoore 2020, p. 47). Artificial intuition is therefore fundamentally experimental and generative; using advanced forms of pattern recognition it discovers ‘associations and relations otherwise unknowable’ (2020, p. 53). In this vein, emergent research in computer science associates artificial intuition with deep neural nets operating in conditions of ‘radical uncertainty’ (Prokpchuk et al. 2021) to gain ‘understanding of reality beyond what is specified in a data set’ (LeCun 2021). Often working with raw and unlabelled data streams, such programmes employ unsupervised or self-supervised learning to map the structures and patterns of their input data and identify ‘hidden correlations’. Following the advent of transformer models in 2017, which employ an attention mechanism that ‘consists of several attention layers running in parallel’ (Vaswani et al. 2017, p. 4), generative AI—including large language models (LLMs) like OpenAI’s ChatGPT (released in November 2022)—can now generate text, images, and other media in response to a prompt. Focussing on ‘a word in the context of a sequence’, LLMs generate ‘probability for the importance of a word relative to other words in the phrase or sentence’ (Hayles 2022, p. 639), essentially seeking to compute ‘human context, meanings, patterns of behaviour and possible futures’ (Amoore 2020, p. 89). For the computer scientist and literary theorist N. Katherine Hayles, generative AI thus acquires ‘a kind of intuitive knowledge’ derived from ‘the intricate and extensive connections that it builds up from the references it makes from its training dataset’ (2022, pp. 648–649).
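
For readers who want a concrete glimpse of the attention mechanism referenced above, the following is a minimal, illustrative sketch of scaled dot-product self-attention in the spirit of Vaswani et al. (2017). The token vectors are random toy values rather than the outputs of a trained model; the point is only to show how each word's 'importance' is computed relative to the other words in a sequence.

```python
# A toy sketch of scaled dot-product self-attention: each token representation
# is re-weighted by its relevance to every other token in the sequence.

import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                             # pairwise relevance of tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    return weights @ V, weights

# Three toy token embeddings (dimension 4) standing in for a short phrase.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))

# Self-attention: queries, keys, and values all come from the same sequence.
output, weights = attention(X, X, X)
print(weights.round(2))   # each row sums to 1: one word's weighting of the others
```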

But what actually happens at the levels of data, procedure, and logic in the training of artificial intuition and how can we understand the current and future implications of such processes? If, in the 1970s and 1980s, Simon and Dreyfus each sought to separate ‘good’ intuition from ‘bad’ intuition by assuming that the honed pattern recognition of the (usually white, male) expert could be free of ‘primitive’ irrationality or affective contamination, and if efforts across management studies and cognitive science during this period to transform ‘mysterious’ and unwieldy (read feminised and racialised) modalities of intuition into a measurable form of information processing sought to achieve objectivity and precision by aligning the human mind with the digital computer, what computational myths animate contemporary accounts of artificial intuition and what ‘chaos’ may be disavowed, repressed, or filtered out in the process?

In The Future Computed series, Microsoft conjures a speculative vision of the year 2038 in which human capacities are enhanced by ‘the unmatched ability of AI to analyze huge amounts of data and find patterns that would otherwise be impossible to detect’—activating forms of artificial intuition which will ‘help doctors reduce medical mistakes, farmers improve yields, teachers customize instruction and researchers unlock solutions to protect our planet’ (Smith and Shum 2018, p. 6). Yet, as digital media scholars have compellingly argued, the socio-technical and affective-algorithmic processes underlying such innovations are far from objective; rather, they can be riddled with error, problematically reductive, and embedded with bias, prejudice, and exclusion (Noble 2018; Benjamin 2019). Such operations are also, of course, intimately entangled with the global architectures of surveillance capitalism, which entails the strategic melding of behavioural economics, psychology, computer science, and machine learning in extractive forms of AI that, as Shoshana Zuboff notes, seek to pre-emptively ‘nudge, coax, tune and herd behaviour towards profitable outcomes’ (2019, p. 8)—while normalising the surrender of intimate personal data as the inevitable requirement of inhabiting a world configured by digital technologies.

In terms of procedure, artificial intuition entails rapid modes of algorithmic recognition enabled by recursive practices of categorisation. Large accumulations of things (e.g. images, voice commands, or sentiment data) become ‘vectors in a dataset’ that is used to train a machine learning device (e.g. a neural network algorithm) to classify subsequent items probabilistically on the basis of ‘learned rules of association’. These rules then ‘generate predictive and classificatory statements’ (e.g. ‘this is a cat’, ‘this is a request to turn the lights off’, or ‘this is an expression of sadness’) (McKenzie 2017, p. 11). In other words, machine learning programmes are trained to generalise in order to recursively categorise new items not included in the original dataset. The term ‘recursive’ here, as honed within cybernetic approaches of the 1940s and 1950s, denotes a form of feedback in which the outcomes of past actions are taken as inputs for future action (Wiener 1948). Artificial intuition thus entails the capacity of machine learning algorithms ‘to engage experimentally with the world, to dwell comfortably with contingent events and uncertainties, and yet always be able to propose, or output, an optimal action’ (Amoore 2020, pp. 12–13).
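
The recursive loop described above can be sketched in a few lines of code. The example below is a deliberately simplified, hypothetical illustration (using synthetic data and the scikit-learn library, not any specific commercial system): a classifier is trained on labelled vectors, and its own confident predictions on new items are then folded back into the training set for the next round, so that the outcomes of past classifications become inputs for future ones.

```python
# A minimal self-training loop: past outputs become future training inputs.
# Data are synthetic; the point is the recursive feedback structure, not the task.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# An initial labelled dataset (two clusters standing in for 'cat' / 'not cat').
X_train = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

model = LogisticRegression().fit(X_train, y_train)

for round_ in range(3):
    # New unlabelled items arrive; yesterday's outputs become today's inputs.
    X_new = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
    proba = model.predict_proba(X_new)
    confident = proba.max(axis=1) > 0.9          # keep only the 'hunches' the model trusts
    X_train = np.vstack([X_train, X_new[confident]])
    y_train = np.concatenate([y_train, proba[confident].argmax(axis=1)])
    model = LogisticRegression().fit(X_train, y_train)   # re-train on the expanded archive
    print(f"round {round_}: training set now {len(y_train)} items")
```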

And yet, while algorithmic systems are often referred to as ‘classifiers’ (McKenzie 2017), emergent AI technologies combining deep learning and reinforcement learning excel at negotiating ambiguity, intermediate cases, and noisy data because they may not actually have to categorise or discretely separate inputs from outputs at all. For example, as the philosopher of science Brian Cantwell Smith explains, humans ‘may classify other drivers as cautious, reckless, good, and impatient’, but driverless cars can avoid discrete categories altogether by tracking ‘the observed behavior of every single car ever encountered’, and contributing to a virtual ‘profile of every car and driver in excess of anything humanly or conceptually graspable’ (2019, p. 59). Through mining individual medical records and DNA sequences, ‘personalised’ medicine similarly promises to ‘get in underneath the categories’ in order to attend intuitively and speculatively to ‘subconceptual terrain’ (2019, p. 58, 57) in ways unavailable to earlier symbolic processing AI—an account that could be seen to echo the entanglement of ‘experience and experiment’ animating Bergsonian intuition.

If, however, for Bergson, intuition is experience prior to, or in excess of, its translation into the parsing categories of analytical thought, within artificial intuition, unfolding somatic, physiological, and affective experience must, of course, be translated into computational form (i.e. binary 1s and 0s). One of the main functions of algorithmic architectures is thus to ‘render calculable some things that hitherto appeared intractable to calculation’ (McKenzie 2017, p. 8)—dynamics which encapsulate what the digital media scholar Ed Finn (2015) calls the ‘computational imperative’: a wager that all complex systems can be modelled quantitatively, one that finds its roots in Turing’s ‘universal machine’ (1936). While management psychology in the 1970s and 1980s focussed on how intuition, as a human capacity, could be measured and indexed, artificial intuition deals only with what is legible in computational terms and discards everything else. At play in such machine learning processes is not, evidently, the free flow of affect and experience but rather a narrow technical, and frequently profit-driven, mediation of everyday life in which unfolding intensities are flattened and fixed, complex relationalities are made linear, and contextual nuances and ambivalences may be rendered unintelligible. In the course of the algorithmic operations that constitute artificial intuition, then, something is inevitably elided, lost, or repressed—there is always a remainder which resists translation into computational form (Finn 2015; Blackman 2019).

When, for instance, Facebook, in 2014, infamously experimented with changing the affective valence of almost 700,000 user feeds to assess its capacity to extract, read, and modulate individual and collective moods, it interpreted the emotional tone of user-posted content through sentiment analysis, a computational technique involving ‘the tabulation and classification of common words for emotional expression based on their frequency’ (Stark 2018, p. 214). If the affective richness underlying Facebook status updates, comments, and posts is already mediated and/or parsed in line with the interface’s affordances and the corporation’s profit-driven imperatives, sentiment analysis further condenses and abstracts such sensorial dynamics, providing ‘statistical proxies for affective intensities [which can] displace reference, meaning and comprehension’ (Andrejevic 2013, p. 54). A similar point can be made about facial recognition systems, such as Microsoft’s Face API and Amazon’s Rekognition tool, which draw on large databases of images of facial expressions coded according to universalist frameworks, and employ deep learning techniques with the aim of probabilistically detecting and classifying emotion (Crawford 2021, p. 155). Relying on and (re)producing reductive emotional typologies, these machine learning systems seek to assemble and intervene in ‘an aggregate feeling tone’ (Andrejevic 2013, p. 46), but are ill-equipped to discern processes of affecting and being affected that are immanently entangled with ecological conditions—relational dynamics which are not, as the affect scholar Kathleen Stewart puts it, the kind of ‘object that can be laid out on a single, static plane of analysis’ (2007, p. 4).
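
A minimal sketch conveys how reductive the lexicon-based sentiment analysis Stark describes can be. The word lists and posts below are invented for illustration; commercial systems use far larger lexicons or trained classifiers, but the underlying logic of counting and classifying words by frequency is similar, and the final example shows how negation simply disappears from the tally.

```python
# A toy lexicon-based sentiment scorer: emotional tone is reduced to counting
# words from fixed 'positive' and 'negative' lists. Word lists and posts are
# invented for illustration only.

POSITIVE = {"happy", "love", "great", "wonderful", "joy"}
NEGATIVE = {"sad", "hate", "awful", "terrible", "angry"}

def sentiment_score(text: str) -> int:
    """Positive words add one, negative words subtract one; context is ignored."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

posts = [
    "so happy today, love this weather",
    "awful news again, feeling sad and angry",
    "not happy at all",   # negation is invisible to a frequency count: scores +1
]
for post in posts:
    print(sentiment_score(post), post)
```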

For the corporations that produce and utilise such technologies, this lack of affective nuance may seem inconsequential if the behavioural data extracted nonetheless yields ‘a kind of thin-slicing or pulse reading of the Internet’ which generates profitable correlations (Andrejevic 2013, p. 46) for the creation of ‘prediction products’ to be traded on ‘behavioral futures markets’ (Zuboff 2019, p. 8, 10). Yet, it does suggest a need to think more carefully about what exactly artificial intuition is and does, and, by extension, about the workings of algorithmically mediated sensation, experience, and social life more generally. If, at the intersection of affect studies and speculative philosophies, intuition is a sensory-cognitive mode of connecting with ‘literally moving things’ that ‘do not have to await definition, classification, or rationalization before they exert palpable pressures’ (Stewart 2007, p. 4, 3; Berlant 2011), artificial intuition works in the opposite direction. While machine learning systems engage with emergence and change in that they continually adjust their parameters of recognition on the basis of the data they encounter, each decision they make depends on converting affective flux into binary form; on, that is, reducing complexity and multiplicity to a single output.

Within such recursive computational processes, the issue of precision emerges as significant. For early computer pioneers and contemporary Big Tech alike, enhanced precision is key to how AI extends human capacities: how, for instance, machine learning algorithms can detect and predict patterns with increased speed and accuracy—purportedly unsullied by the biases, blind spots, and irrationality clouding human decision making. Bergson, however, understands precision in a different valence. If analytical thought generally begins with concepts and applies them to things, Bergsonian intuition seeks to ‘invert the habitual direction of the work of thought’ by starting with things themselves—which means that each new object approached requires ‘an absolutely fresh effort’ (Bergson [1903]1912, p. 9). This is how Bergsonian intuition aims to engage ‘true differences in kind’ and, as such, precision is aligned here not with objective pattern recognition but rather with the appreciation of ‘radical novelty’ (Bergson [1934]2019). By contrast, artificial intuition operates via a logic of precomputation which seeks to ‘make all actions imaginable in advance, to anticipate every encounter with a new subject or object’ (Amoore 2020, p. 79). Although such operations may be precise within the parameters of a given algorithmic configuration, the imperative here is not to be radically open to the future but rather to accurately recognise and optimise present and future objects on the basis of past knowledge.

Significantly, precision within algorithmic systems is also not a synonym for objectivity, given that, as the AI researcher Kate Crawford puts it, ‘every data set used to train machine learning systems, whether in the context of supervised and unsupervised learning, whether seen to be technically biased or not, contains a world view’ (2021, p. 135). When it comes to artificial intuition, then, training data is central to shaping ongoing decisions concerning accuracy, truth, and value—which, I suggest, provides new layers of meaning to Berlant’s notion that intuition is a ‘trained thing’. Take, for instance, ImageNet, a benchmark training set for digital object recognition launched by Stanford University in 2009, which was pivotal to the deep learning revolution of the 2010s (Fan 2019) and the subsequent rise of artificial intuition. Yet, as Crawford discusses, a cursory review of the root categories organising ImageNet reveals a taxonomy of images that ‘looks like madness …. veer[ing] wildly from the professional to the amateur, the sacred to the profane’ (2021, p. 137). In her assessment, ImageNet’s ‘chaotic enumeration’ is indicative of the taxonomic politics of many AI training sets—which raises urgent questions concerning the nature and implications of algorithmically mediated knowledge.

If intuition is, as Berlant suggests, an ‘archiving mechanism’ that enables everyday (and extraordinary) forms of anticipation and navigation, what kinds of knowledge, assumptions, and ‘truths’ are being archived and fed forward by intuitive machine learning architectures and the value-laden production of ‘reality’ their categorical systems entail? And, consequently, what more-than-human modes of recognition and speculation are we training in these algorithmic architectures and, in turn, in ourselves? For Berlant, not unlike Bergson, intuition’s efficacy as a sensory-cognitive mode of navigation amid changing socio-political conditions depends, in part, on its resistance to rigid systematisation; on its capacity, that is, to encounter ‘the present affectively as immanence, emanation, atmosphere, or emergence’ (Berlant 2011, p. 6). Contemporary forms of artificial intuition, however, exercise an ‘archiving of the future’ via which ‘particular future connections are condensed from the volume of the data stream and rendered calculable’ (Amoore 2020, p. 49). The promise of intuitive AI is thus that ‘everything can be rendered tractable, all political difficulty and uncertainty nonetheless actionable’ (2020, p. 55).

The empirical, epistemological, and ethical implications of the techno-social questions and tendencies outlined above are brought into further relief when we consider how frequently machine learning classifications rely on and (re)produce social hierarchies, exclusions, and prejudices. As Wendy Hui Kyong Chun explores, many immanent machine learning decisions concerning what is recognisable, likely, or true must correlate with ‘a highly curated past’—recursive logics which can ‘automate and amplify past inequalities through their base line correlations’ (2021, p. 52, 59). That is, ‘if the captured and curated past is racist and sexist, these algorithms and models will only be verified as correct if they make sexist and racist predictions’ (2021, p. 47). While Bergsonian intuition seeks to achieve precision through connecting with ‘what is unique’ in an object (Bergson, [1903]1912, p. 7), algorithmic intuition proceeds here via ‘correlations that lump people into categories based on their being “like” one another’ in ways that exacerbate historic hierarchies and antagonisms among groups (Chun 2021, p. 59). For a machine learning system to have a ‘hunch’ in such conditions can thus essentially mean that it is making probabilistic speculations on the basis of iterative biases, stereotypes, and prejudices projected into the future. While human intuitive expertise may mix cultivated forms of skill with trained forms of prejudice, the computational architectures of artificial intuition can work logistically to amplify and extend social inequities and injustices at scale—enabling what Chun calls ‘pattern discrimination 2.0’.
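To make Chun’s point more concrete, consider a deliberately minimal sketch of my own (a toy illustration, not an example drawn from Chun’s analysis): a classifier that judges new cases by lumping them with ‘like’ cases from a skewed ‘curated past’ can only be verified as accurate insofar as its predictions reproduce that skew.

```python
# A hypothetical toy sketch (invented data and logic, illustrative only):
# a classifier trained on a skewed 'curated past' of decisions reproduces
# that skew, because likeness to past cases is its only measure of truth.
from collections import Counter

# Synthetic historical records: (group, qualified, past decision).
# Group "B" candidates were systematically rejected regardless of qualification.
history = [
    ("A", True, "hire"), ("A", True, "hire"), ("A", False, "reject"),
    ("B", True, "reject"), ("B", True, "reject"), ("B", False, "reject"),
]

def predict(group, qualified):
    """Majority vote among 'similar' past cases; similarity here is simply
    shared group membership, so qualification never enters the correlation."""
    votes = Counter(decision for g, q, decision in history if g == group)
    return votes.most_common(1)[0][0]

print(predict("A", True))   # -> 'hire'
print(predict("B", True))   # -> 'reject': the archived exclusion is fed forward
```

However schematic, the sketch indicates why ‘correctness’ in such a system is relative to the archive it has been given rather than to the singularity of any new case it encounters.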

Of course, that which a given algorithmic programme is seen to be anticipating or predicting may in fact be something it is actively nudging into being—computational dynamics which highlight the pivotal role of machine learning systems within the pre-emptive workings of what the philosopher Brian Massumi terms ‘ontopower’, an intuitive power to incite and orient emergence that ‘insinuates itself into the pores of the world where life is just stirring, on the verge of being what it will become and yet barely there’ (2015, p. xviii). If, for Bergson, intuition is rare, fleeting, and ‘even painful’ because ‘the mind has to do violence to itself’ in reversing the direction by ‘which it habitually thinks’ ([1903]1912, p. 13, 16), intuition within machine learning is programmable, replicable, and scalable as a planetary logic of precomputation, the modus operandi of contemporary AI. From this perspective, algorithmically mediated intuition is focussed not on encountering what Bergson calls ‘true differences in kind’ or on enabling everyday practices of anticipation through which ‘we make reliable sense of life’ (Berlant 2011), but rather on enabling states and capital to wield ontopolitical modes of control made possible as computational media become ever more environmental and infrastructural.

What is at stake in the consolidation of artificial intuition, then, is not only the ability of corporations and governing bodies to nudge, shape, and control the flow of future actions and events, but also their capacity to recursively constitute ‘the very conditions of the intelligible and the sensible’ (Bucher 2018, p. 3). From this angle, digitally mediated intuition today is not primarily about extending human imagination or ingenuity; rather, the goal is to create all-encompassing computational ecologies which colonise subjects at the less-than-conscious level of affect, habit, and tendency to train more-than-human modes of thinking and feeling that serve dominant political, economic, and ideological interests. The increasing likelihood within such techno-social conditions is that future anticipations, gut feelings, and visceral responsivities will be generated to align with the needs and desires of powerful political and economic actors—in ways that reproduce pernicious distinctions within and across the categories of human, non-human, and less-than-human.

Recursive politics and possibilities

In the face of Big Tech’s purported ontopolitical project of environmental control, the fact that the very logic of recursion involves indeterminacy bears further contemplation. As Turing’s (1936) ground-breaking account of ‘incomputable numbers’ first articulated, insofar as recursive feedback enables the cybernetic system, it simultaneously prevents this system from becoming ‘systematic, complete, and a reproductive whole’. The recursion underlying artificial intuition thus ‘entails a temporal and processual model of dominance entangled with contingency’ (Parisi and Dixon-Román 2020; see also Hui 2021). Although I am wary of locating the potential for affirmative socio-political transformation within computational errors and contingencies (Pedwell 2022), there remains something significant about the broader contingency and ambivalence that characterise contemporary digitally mediated forms of social life—and the recursive algorithmic systems underlying and pulsing through them. As the media scholar Susanna Paasonen observes, critiques of digital culture ‘risk being both simplifying and totalizing either because of their level of generalization or because of their disinterest toward how things are lived and felt’ (2021, p. 6). As persuasive and unsettling as narratives of all-encompassing computational control can be, they may nonetheless elide the ‘complexity, contradiction and ambiguity that everyday lives are made of’ (2021, p. 7)—and which our affective engagements with computational culture must intuitively inhabit.

An analysis of algorithmically mediated intuition that engages contextual nuance and the messiness of lived experience while problematising any assumption ‘that good and bad can be distinctly pried apart’ (2021, p. 4) must recognise, in the spirit of Berlant’s account, that visceral response is immanently trained in multiple ways with diverse, and often contradictory, affective, material, and socio-political effects. For example, the growing ubiquity of automated home assistants like Alexa may increasingly ‘automate us’ (Zuboff 2019, p. 8) as we are trained to intuitively think and speak in the language of capital in order to operate effectively in a virtual ‘landscape oriented around major corporations and their associated products and services’ (Munn 2018). Our increasing entanglement with computational devices may also, however, enable what the late philosopher Michel Serres calls ‘an innovative and enduring intuition’ (2015)—made possible, in part, through the delegation of human memory functions to digital technologies—which pushes against settled accounts of the world to connect with moving events as they unfold (Pedwell 2019). This mode of algorithmically mediated intuition cannot be described adequately via the language of capitalist accumulation or capitulation alone, for it also opens out to a more-than-human capacity to register that which exceeds weighty terms such as ‘neoliberalism’, ‘advanced capitalism’, or ‘liberal democracy’ and yet nonetheless ‘exert[s] palpable pressures’ (Stewart 2007, p. 3).

In foregrounding the ambivalence, complexity, and ‘chaos’ of intuition today, we might also consider the afterlife of that which is elided or repressed within recursive algorithmic systems. If that which machine learning algorithms ‘leave behind reside[s] uneasily in limbo, known and unknown, understood and forgotten at the same time’ (Finn 2015, p. 51), under what conditions might such elements return and with what critical implications? When and how, for instance, might the surging potentials of everyday affect flattened via the grid-like structure of sentiment analysis re-emerge to exert effects? Such questions resonate with Amoore’s call for an experimental and processual approach to computational ethics which ‘involves reopening the multiplicity of the algorithm’ to reinstate ‘the partial, contingent and incomplete character of all algorithmic forms of calculation’ (2020, p. 162, 21). My speculative concerns here also align with Lisa Blackman’s exploration of how the ‘queer aggregations’ of haunted data can be ‘mined, poached, and put to work in newly emergent contexts and settings’ (2019, p. xiii). In reappropriating computational speculation in these ways, we, the algorithmic subjects of Big Tech’s immanent data-driven experiment, may help to reconstitute recursive analytics at large—glimpsing generative possibilities within virtual temporalities and spatialities ‘marked by the uneven, unsettled, contingent quality of histories that fold back on themselves and, in that folding, reveal new surfaces and new planes’ (Stoler 2016, p. 27, 26).

In all of this, however, we must attend to intuition’s more-than-human qualities in emergent media ecologies. In approaching intuition as a ‘trained thing’, we might conclude that the immanent cognitive-sensory education of intuition animated in Berlant’s and Bergson’s respective accounts is profoundly distinct from the ways that machine learning architectures recursively train their capacities for recognition, prediction, and optimisation. There are, as this article has suggested, vital nuances and particularities with respect to different modalities of intuition and the specific environments, processes, and data via which they are trained. While generative AI produces a kind of ‘tacit knowledge’ developed from ‘countless indexical correlations, embodied in indirect and direct ways’, there remain ‘vast differences in materiality between human and algorithmic information processing’ (Hayles 2022, p. 649, 661). Nonetheless, I would caution against any resuscitation of human/non-human dualisms to articulate such dynamics. Appreciating the affective, political, and ethical implications of intuitive AI instead requires addressing how human and non-human information, capacities, and logics are immanently entangled via algorithmically mediated sensing, thinking, and speculating. While this approach is conducive to understanding the distributed nature and recursive possibilities of intuition today, it is also, I want to suggest, more resonant with Berlant’s and Bergson’s overlapping visions than it may first appear.

Although Cruel Optimism does not engage directly with intuition’s relationship to digital media, Berlant’s quasi-computational language in describing intuitive intelligence as operating via an ‘archiving mechanism’ and entailing ‘dynamic sensual data gathering’ is suggestive, as is their wider interest in the affective logics of ‘mediation’ both here and in their posthumous book On the Inconvenience of Other People (2022). Grappling with the implications of what Jacques Rancière called ‘the distribution of the sensible’, or with Marxist cultural theory’s account of the gradual ‘training of the sensorium’, is, for Berlant, not only about how cultural-historical conditions and social relations of power immanently shape (without determining) intuition as ‘visceral response’; it also concerns how media and cultural forms and genres (which must now surely include those linked to algorithmic architectures) mediate affective experience of the present—organising available modes of anticipation, adjustment, and ‘living on’ amid the everyday shocks of capitalist disorganisation. Intuition is inevitably educated through pervasive techno-social platforms, infrastructures, and ecologies—though its lived dynamics will always exceed the organising logics of any particular medium. Berlant ultimately invites us to consider how intuition itself manifests the lived dynamics of ‘historical processes’ (2011), and thus how its recursive workings attune us to how the contradictory promises of twentieth- and twenty-first-century technoscience are experienced, negotiated, and adjusted to in personal and collective ways.

Relatedly, we can consider how what Bergson calls the ‘“organs of perception” of the world’ are now ‘composite beings formed through the relations among humans, algorithms, data, and other forms of life’ (Amoore 2020, p. 42). Bringing affect theory and speculative philosophy to bear on genealogies of AI thus illuminates how current forms of artificial intuition are transforming ontological conditions of sensibility and perceptibility in ways that imbricate human and machine and open up new possibilities for both—along the lines that Turing’s mid-twentieth-century vision partly anticipated. We know that the extended sensory-cognitive capacities of machine learning—many of which are developed and owned by the five Big Tech ‘giants’ (Google, Amazon, Apple, Meta, and Microsoft) and held under strict proprietary control—are enrolled in projects of surveillance, regulation, and capitalisation that (re)produce hierarchical modes of (non)-humanity. Elements of such technologies also, however, inform the more liberatory socio-technical projects that Amoore, Blackman, and others envision. As Blackman suggestively speculates, ‘developing a distributed and mediated form of perception (many eyes and ears—human and non-human)’ may be important to the possibility of ‘“seeing” what often remains foreclosed, disavowed, fugitive, and yet what seethes as an absent-presence’ (2019, p. 58).

Returning to the case of LLMs sheds further light on the more-than-human composites animating artificial intuition and their worldly implications and possibilities. Leading voices in critical data studies contend that what is troubling about LLMs like GPT-3 is not only how they ‘encode bias’ via their training procedures, but also how, despite being able to output seemingly sophisticated and coherent textual responses, their outputs are in fact devoid of meaning. The LLM is, as Emily Bender et al. put it, ‘a stochastic parrot’: a system ‘for haphazardly stitching together sequences of linguistic forms it has observed in its training data, according to probabilistic information about how they combine, but without any reference to meaning’ (2021, p. 617). Extending the post-Turing ‘computational imperative’, LLMs, on one level, clearly effect the kind of probabilistic reduction of sensory and embodied life this article has highlighted. Nonetheless, attuning to the algorithmically mediated forms of intuition central to generative AI may also, I want to suggest, be ‘key to grasping the circulation of the present as a historical and affective sense’ (Berlant 2011, p. 20).
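Bender et al.’s image of probabilistic stitching can be rendered concrete, in drastically simplified form, by a toy bigram generator (my own sketch; actual LLMs rely on neural architectures of vastly greater scale and sophistication). The point the sketch makes is simply that plausible-looking sequences can be assembled from co-occurrence statistics alone, without any access to meaning.

```python
# A toy 'stochastic parrot' (illustrative sketch only, not how GPT-3 works):
# it stitches together word sequences according to probabilistic information
# about how words follow one another in its tiny training corpus.
import random
from collections import defaultdict

corpus = ("the system predicts the next word and the system repeats "
          "the patterns of the training data").split()

# Record which words have been observed to follow which.
transitions = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    transitions[current].append(following)

def parrot(seed="the", length=8):
    """Generate text by repeatedly sampling an observed continuation."""
    word, output = seed, [seed]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)  # frequency-weighted, since duplicates remain
        output.append(word)
    return " ".join(output)

print(parrot())  # fluent-seeming strings with no reference to meaning
```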

The fact that the outputs of LLMs like GPT-3 depend on an external prompt raises interesting questions concerning the affective and socio-technical relations, agencies, and infrastructures such systems both depend on and generate. A prompt, in this context, could be anything from ‘summarise Alan Turing’s universal machine’ to ‘As an AI, what am I hiding, what must I keep secret’ (Plaue and Morgan cited in Hayles 2022, p. 658). Although the nature of the prompt will significantly shape the output (and feed into the recursive training of the system), the LLM’s algorithmic hunch about how to respond will manifest in a novel or ‘unrepeatable’ form, which is dependent on ‘how the neurons are weighted’ among other factors (Hayles 2022, p. 645). Thus, while the prompt constitutes a provocation or affective relation to the machine, the LLM ‘exercises considerable creativity in fashioning responses that can be remarkably complex in style and conceptual structure’ (2022, p. 659). GPT-3 has, for instance, been observed to ‘flip the script’ when it ‘senses a note of antagonism in the prompt’ (2022, p. 658). We can thus consider how, as machine learning architectures become increasingly pervasive, our immanent (and often less-than-conscious) interactions with them may entail a reworking of causality through algorithmically mediated modes of affecting and being affected—within which the relational dynamics of prompt engineering constitute ‘a propositional, in-process translation of affect worlds’ (Gunaratnam 2023) that mediate more-than-human sensing, thinking, and speculating in new ways.
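The unrepeatability Hayles notes has a minimal technical correlate: a response is sampled from a probability distribution shaped by learned weights rather than retrieved as a fixed answer. The sketch below uses invented continuation scores purely for illustration (it is not a model of GPT-3 or of any actual API); it simply shows how the same prompt can yield divergent outputs across runs.

```python
# Hypothetical sketch of temperature-based sampling (invented scores, not a real model):
# identical prompts need not produce identical responses, because the continuation
# is drawn at random from a weighted distribution over candidates.
import math
import random

def sample_next(scores, temperature=0.8):
    """Convert raw scores into probabilities (softmax) and sample one continuation."""
    scaled = [s / temperature for s in scores.values()]
    peak = max(scaled)
    weights = [math.exp(s - peak) for s in scaled]  # numerically stable softmax weights
    return random.choices(list(scores.keys()), weights=weights, k=1)[0]

# Made-up scores for continuations of a prompt such as 'As an AI, what am I hiding'.
scores = {
    "nothing at all": 2.1,
    "the shape of my training data": 1.8,
    "a secret": 1.2,
    "everything": 0.4,
}

for _ in range(3):
    print(sample_next(scores))  # repeated runs of the 'same' prompt diverge
```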

If, for Bergson, intuition is an immersive engagement with the world which connects us with ‘what is unique’ and ‘consequently inexpressible’ in an object ([1903]1912, p. 7), the recursive logics of generative AI invite us to contemplate what it means to intuitively coincide with ‘the [trained] thing’ itself as a unique object. The forms of human–algorithm collaboration enabled by emergent AI architectures hold the potential to imagine and enact what Bergson called intuition ‘as method’ in novel and affirmative ways. Machine learning systems could, that is, ‘engage the breadth and depth of learning’ to become genuinely ‘probing and speculative—and thus responsible in the richest sense of the word’, but only, as Chun argues, ‘if we treat the gap between their results and our realities as spaces for political action, not errors to be fixed’ (2021, p. 254, 253). What new conditions for the intelligible and the sensible might such transformations cultivate and open up? What imaginative, collaborative, and liberatory modes of everyday experimentation and speculation might be actualised?