“What’s on your mind?”


These words appear, subtly greyed out, in an input box on the user interface, inviting users to write over them, replace them, respond. Pitched somewhere between an inquiring friend and a therapist, such is the gentle nudge fuelling content generation on the Facebook platform. What’s on your mind? Consider it a general declaration of the cognitive orientation of contemporary media. An underwhelming apotheosis when contrasted with the World Brains, augmented intellects or man–machine symbioses of days past.

By sharing what’s on their minds, millions of users take part in a giant, unrelenting performance of what Clay Shirky (2010) once called “cognitive surplus”. Today, however, with the companies that trade in what’s on people’s minds at the centre of our platform/data economy, perhaps surplus (in the sense of excess or left over) is no longer the word. Industries of cognitive guidance and extraction have filled the brain space that was once considered left over. Surplus has become imperative.

As readers of this journal are no doubt acutely aware, artificial intelligence and the multiple sub-strands of machine learning are currently enjoying something of a renaissance, with numerous new centres for AI-related study established in recent years and AI technology considered by government—here in the United Kingdom—as one of four “Grand Challenges” for the future of industry (HM Government 2017). There is much enthusiasm over the potentials of AI technology, as well as more cautionary voices that have clustered around the notion of ethics.

As much of the attention is on the potentials and pitfalls of new developments and new applications of AI, this special issue aims to expand the discussion and to reflect on the wider cognitive milieu within which new AI technologies are implemented. This milieu is constituted by digital media technologies and devices, their networks and infrastructures, and the thinking populations who share their thoughts and behaviours. As I hope can be gleaned from the above, this cognitive milieu is as everyday and ordinary as it is remarkable, constituted as it is by millions or billions of people’s routine activity and mediated through affordable consumer devices and commercial platform providers. It has a political economy as much as it does a science, it is a culture as much as an industry, and it has a history, or rather many histories, as much as it informs visions of the future.

The origins of the theme of ‘streams of consciousness’ in this issue lie with a conference held at the University of Warwick in 2016, titled Streams of Consciousness: Data, Cognition and Intelligent Devices. While the two events had no formal affiliation, the conference was partly inspired by an earlier small workshop held at Goldsmiths College in 2010, titled A Billion Gadget Minds (the collected papers were later published as the first issue of the journal Computational Culture). This workshop brought together interdisciplinary thinkers from Cognitive Science and Philosophy to Science and Technology Studies and Cultural Studies. Despite such variety, there was a shared recognition that whatever it is we call ‘thinking’ does not happen in a Cartesian vacuum; that notions like thinking, cognition, and intelligence happen with and through a range of technologies, devices and artifacts and thus need to be thought together. While there is a well-established and lively philosophical discussion about distributed, extended and embodied cognition (Clark 2008; Menary 2012; Rowlands 2013), I found the idea of placing questions of cognition more closely alongside those of media and mediation fruitful and compelling, precisely in light of the visible activities of the largest platform providers and their manifest interest in our minds.

Streams of Consciousness (the conference) aimed to offer a scaled-up version of this workshop approach, with many more speakers and a more diverse range of disciplinary and interdisciplinary voices (adding geographers, designers, anthropologists, critical theorists, sociologists, and political economists to the mix). At the invitation of AI and Society’s Editor, Professor Gill, the articles presented in this issue have been developed from presentations given at the conference. Before introducing the articles, a note on the notion of ‘streams of consciousness’ that holds the issue together.

Understandings of consciousness as temporal flow or continuum have been present in Buddhist philosophy for over a millennium via the notion of ‘Mindstream’ (from which the practice of mindfulness—of being aware of the mindstream from moment to moment—derives). In Western thought, it was the American philosopher and psychologist William James (1890) who first wrote of ‘streams of thought’ in his Principles of Psychology when contemplating how consciousness appears to the Self. “Consciousness”, James wrote,

does not appear to itself chopped up in bits … It is nothing jointed; it flows. A ‘river’ or a ‘stream’ are the metaphors by which it is most naturally described. In talking of it hereafter, let us call it the stream of thought, of consciousness, or of subjective life. (1890, p. 239)

The notion reappeared as a modernist literary device in the early 20th Century, notably in the works of Dorothy Richardson, Marcel Proust, James Joyce, Virginia Woolf, and others. In this context, it referred to the use of internal monologue, the writing down of thoughts, descriptions of feelings and other stylistic elements. More generally, in writing it has come to refer to transcribing one’s thoughts in an almost unconscious manner, as writing without thinking, or at least as writing without thinking about writing.

While not remaining faithful to these previous uses, the current collection takes inspiration from them in a number of ways. The idea of a stream or flow is one that zigzags through the history of thinking about cognition (or mind), as well as computation and, more recently, data. While one must remain critical of metaphorical uses of notions such as stream and flow, it is nevertheless the case that contemporary media are not only understood through such metaphors but are actively designed so as to realise them. For example, water metaphors dominate current understandings of data (data streaming, data flows, data lakes, data pours and so on), but one can equally observe over the last 50-odd years an actual transformation in the ontology of data. Through transformations in storage and processing (e.g. the move to relational databases and transaction-based processing) and the development of decision support and other information systems, data has become something that is designed to move, to circulate around organisations and across territories and to do so routinely. Often, we expect our data systems to change or ‘refresh’ continuously when we consult them. In this sense, metaphors of flow and streaming have been made more real and can no longer be understood as purely metaphorical.

It is not just that ‘streams’ refers to both data and thought, but that it captures something of the wider cognitive milieu in which thinking and data intersect with each other in a number of ways. We think with and through data, our thinking produces data, and data are used for multiple other cognitive processes. The stream is always already hybrid. Notions of streams and flow therefore seem doubly appropriate to capture what happens when people share their thoughts via contemporary media, as both the thinking itself and the technical infrastructure (and data in particular) are dually invoked. Moreover, despite ongoing experiments with brain–machine interfaces (Cuthbertson 2019), the practice of writing remains absolutely central to how we use our consumer devices. In this sense, What’s on your mind? would seem to call for a form of writing that precisely resembles the streams of consciousness style of our Modernist predecessors, but now stripped of any sense of the literary avant-garde and possibly of a conscious sense of style at all: streams of consciousness is the default writing mode for the platform user.

That ‘streams of consciousness’ has philosophical, psychological and literary lineages is also fitting, and reflects the issue’s desire to entertain interdisciplinary perspectives on thought, intelligence, and cognition, as well as the desire to explore different (or ‘minor’) histories around these notions. ‘Streams of consciousness’ is an orientating device (or perhaps a disorientating or reorientating device). It is a way of thinking about the cognitive orientation of the digital present. It is an invitation to historicise, revisit and interrogate our dominant concepts (such as ‘intelligence’), to compare and contrast, and to think critically about what’s on our minds.

Both Caroline Bassett and Beatrice Fazi take historical cues as the starting points for their contributions. Bassett revisits Joseph Weizenbaum’s AI experiments at MIT in the mid-1960s and specifically his work on the computer programme ELIZA, which was designed to approximate a Rogerian therapist. Offered as a media archaeological history of the present, Bassett’s article uses this episode to reflect on the rising “therapeutic relationship between computer and human”, whereby scenarios of computers being used to aid calculation, reasoning and decision-making increasingly sit alongside ones in which computers are used to directly alter behaviour. Fazi begins her inquiry into machine thinking through a consideration of Turing’s classic question, ‘Can a machine think?’ She argues for an approach to machine thought that is “dramatically alien to human thought” before re-posing Turing’s question as, ‘Can a machine think anything new?’ Such a question avoids the ‘imitation game’ of Turing’s classic question (whereby computers attempt to simulate human thought) and instead permits a consideration of the novelty of machinic thought that further accentuates the differences (rather than similarities) of human and machine thought.

Tero Karppi and Yvette Granata use a more recent case study to launch a related critique of understandings of computer intelligence as imitating or simulating human thought. Instead of the ‘machine thought’ considered by Fazi, Karppi and Granata take aim at the notions of the ‘artificial’ as well as ‘intelligence’ before equally emphasising the difference and distance between human and computer cognition. They use a case involving Amazon’s cloud-based AI, Alexa, accidentally ordering a dollhouse for a 6-year-old girl as the empirical basis of their inquiry. Their article also makes visible the emerging political economy around AI-powered voice interfaces, and the everyday contexts in which they are deployed. Steve Fuller’s article offers a radically different consideration of human and machine thought. By way of a reappraisal of early cybernetic thinkers, Fuller invites us to consider the human brain itself as a possible artificial intelligence and ponders a future “cognitive agriculture”, where brains or brain elements are grown to bear the (extended) cognitive demands of society. While recognising possible moral objections to such brain harvesting, Fuller advances his argument partly on the basis of the brain’s cognitive efficiency vis-à-vis its silicon counterparts.

Tony Sampson’s contribution explores the forms of expertise that underpin the design of contemporary computational interfaces. He makes a case for a “critical HCI” (Human Computer Interaction) and offers a critical evaluation of the notion of ‘experience’ that informs the design of most consumer electronics and software applications. His analysis of experience resonates with Bassett’s analysis of the therapeutic, in that both notions jettison the idea that computers are used to aid reason and decision-making, or to “augment the intellect” as Engelbart once put it. Through revisiting the notion of experience in the philosophy of Whitehead, Sampson aims to develop an alternative notion of experience. Michael Wheeler’s philosophically oriented consideration of smart (AI) technologies and transparency similarly contains important lessons for interface designers. Wheeler mounts an argument about what kind of visibility, or perhaps perceivability, we should aim for with smart technologies. Rather than couching his argument in legal (privacy, rights, etc.) or economic terms (property), he does so on the basis of how such smart technologies sit in relation to extended theories of mind. Recognising the challenge that thinking done with and through tools may require the tools’ disappearance, Wheeler nevertheless calls for the identification of a “sweet spot” on the spectrum between disappearance (transparency) and appearance (or a conscious perception of tools). Like Karppi and Granata, Wheeler opens his philosophical inquiry with a consideration of a ‘smart’ consumer technology that, upon being embedded in the skin of its owner, vibrates when facing north, thereby equipping the owner with new spatial awareness.

Rounding out this issue is Nick Srnicek’s provocative analysis of how “central banks think”. Srnicek also draws on the extended and distributed theories of cognition that are central to Wheeler’s inquiry, but here they are put to work in order to help understand the role of models in central bank policy making. Srnicek moves the discussion away from smart devices, and indeed from Wheeler’s philosophical register, placing these theories within a more politically charged analysis of sociotechnical reasoning. This is achieved through a critical and historically informed account of the role of dynamic stochastic general equilibrium (DSGE) models in decision making.