Algorithms have become ubiquitous in everyday life to the extent that it is almost impossible to operate without them. In the past few years, several articles have been published in Subjectivity offering a critical evaluation of recent developments in artificial intelligence, machine learning and algorithmic governmentality, and the implications of these for culture and society, including a Special Issue on Digital Subjects (https://link.springer.com/collections/fbiaadacic) in March 2019.

In this issue we explore the entanglement of algorithms in the lifeworld. How do algorithms reflect and represent society and culture? Does the literature on subjectivity help us understand what cultural assumptions may be inscribed in algorithms, and how they got there? What kind of social agency is represented by algorithms? How do people make sense of their engagement with algorithms, what do they imagine the algorithms to be/to be doing? And, conversely, how do algorithms make sense of, form and produce them? What can be said about the broader psychosocial implications of algorithms?

The popular notion of artificial intelligence is of computers performing clever tasks. We typically overlook the human effort and ingenuity that has gone into this performance—thus when a computer beats a human grandmaster at chess, it doesn’t stand modestly on the platform thanking the rest of the team. Anticipating human anxiety about competition from computers, Alan Turing imagined a test that would determine whether an interaction was with a human or a computer: he called it the Imitation Game; we now call it the Turing Test. His first example was to ask a computer to write poetry—specifically a sonnet on the subject of the Forth Bridge. And his idea of a plausible answer for the computer was to say: “Count me out on this one. I never could write poetry” (Turing 1950).

At the time of writing, a chatbot called ChatGPT has attracted a lot of attention as an example of artificial intelligence, and perhaps many people have tested ChatGPT with exactly the same question that Turing imagined. When Jessica Riskin tried it, she was not impressed by its efforts. She found Turing’s imaginary machine’s answer (Turing imitating a machine imitating a human) infinitely more persuasive (as an indicator of intelligence) than ChatGPT’s. “Turing’s imagined intelligent machine gives off an unmistakable aura of individual personhood, even of charm” (Riskin 2023).

In an earlier article, Riskin described a mechanical automaton that attracted large admiring crowds in eighteenth-century Paris. This was, so to speak, a generative pretrained transformer in the shape of a duck, which appeared to convert pellets of food into pellets of excrement. The inventor “is careful to say that he wants to show, not just a machine, but a process. But he is equally careful to say that this process is only a partial imitation” (Riskin 2003).

But let’s turn this thinking around. What does everyday human intelligence look like nowadays, when it seems to be impossible to perform any cognitive task without the aid of a computer or smartphone connected to the internet, without some form of algorithmic mediation? A number of writers on algorithms have explored the entanglement between humans and technical systems, often invoking the concept of recursivity. This concept has been variously defined in terms of co-production (Hayles 1999), second-order cybernetics and autopoiesis (Clarke 2017), and “being outside of itself (ekstasis), which recursively extends to the indefinite” (Hui 2021). Louise Amoore argues that, “in every singular action of an apparently autonomous system … resides a multiplicity of human and algorithmic judgements, assumptions, thresholds, and probabilities” (Amoore 2020).

The articles in this special collection explore this entanglement from several different angles. In the first article “Intuition as a Trained Thing” (https://link.springer.com/article/10.1057/s41286-023-00170-x), Carolyn Pedwell traces the place of intuition in reasoning, drawing on a wide range of disciplines from psychology and decision theory to the philosophy of mathematics, and shows how this is incorporated into algorithmic reasoning (Pedwell 2023).

There are conflicting notions of intuition within mathematics (Poincaré 1905). In her article, Pedwell discusses L.E.J. Brouwer, who extended Poincaré’s critique of classical mathematical logic and developed a much more austere constructivist or “intuitionistic” logic, limiting mathematical proof to those concepts and arguments that could be constructed mentally. Among other things, this means abandoning the law of excluded middle (Dalen 2012).
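A standard illustration of what is at stake here (a textbook example, not drawn from Pedwell’s article) is the classical proof that there exist irrational numbers $a$ and $b$ with $a^{b}$ rational:

\[
\text{either } \sqrt{2}^{\sqrt{2}} \in \mathbb{Q}, \text{ in which case take } a = b = \sqrt{2}; \quad \text{or } \sqrt{2}^{\sqrt{2}} \notin \mathbb{Q}, \text{ in which case take } a = \sqrt{2}^{\sqrt{2}},\ b = \sqrt{2}, \text{ so that } a^{b} = \sqrt{2}^{2} = 2.
\]

The argument invokes the law of excluded middle to assert that one of the two cases must hold, yet it never determines which; for Brouwer, an argument that exhibits no witness does not count as a proof of existence.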

Another entirely separate line of attack concerns the use of intuition in generating new ideas, or in solving problems. The Hungarian mathematician George Pólya is known for promoting the teaching of heuristics as systematic methods for mathematical discovery and invention. “Let us teach proving by all means, but let us also teach guessing” (Pólya 1963). Pedwell quotes R.L. Wilder’s version of this idea: “Intuition, as used by the modern mathematician, means an accumulation of attitudes (including beliefs and opinions) derived from experience, both individual and cultural” (Wilder 1967).

A third thread concerns the possibility of using intuition to supplement rigorous formal proof – and indeed the necessity of this following Kurt Gödel’s work on recursivity and incompleteness. A version of this idea can be found in Alan Turing’s PhD thesis, where he says “In pre-Gödel times it was thought by some … that all the intuitive judgments of mathematics could be replaced by a finite number of these rules. The necessity for intuition would then be entirely eliminated” (Turing 1939). In his response to Turing, the philosopher J.R. Lucas argued that Gödel’s incompleteness theorem proved that minds cannot be explained as machines. “We can (or shall be able to one day) build machines capable of reproducing bits of mind-like behaviour, and indeed of outdoing the performances of human minds: but however good the machine is … it always has this one weakness. … The Gödelian formula is the Achilles heel of the cybernetical machine” (Lucas 1961). This argument also supports Pedwell’s observation: “there is always a remainder which resists translation into computational form”.
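For readers unfamiliar with the formal details, the “Gödelian formula” that Lucas invokes can be sketched as follows (a compressed, standard rendering rather than Lucas’s own notation): for any consistent, effectively axiomatised formal system $F$ strong enough for arithmetic, one can construct a sentence $G_F$ such that

\[
F \vdash \bigl( G_F \leftrightarrow \neg \mathrm{Prov}_F(\ulcorner G_F \urcorner) \bigr),
\]

that is, $G_F$ asserts its own unprovability in $F$. Provided $F$ is consistent, $F$ cannot prove $G_F$; yet, reasoning about $F$ from outside, we can recognise $G_F$ as true. It is this recognition, unavailable to any machine bound by the rules of $F$, that Lucas takes as the mind’s advantage, and that Turing had earlier assigned to intuition.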

Pedwell also notes the significance of intuition within business administration, drawing on a key paper by Herbert Simon on the role of intuition and emotion in management decision-making. Simon’s view of intuition as based on pattern recognition fits with his notions of intelligence as largely concerned with decision-making. As Evgeny Morozov notes, “many critics have pointed out that intelligence is not just about pattern-matching. Equally important is the ability to draw generalisations” (Morozov 2023). For example, Bernard Stiegler and his collaborators invoke Poincaré in their criticism of Simon’s influence over the whole field of computing and artificial intelligence, especially “the dominant view in the cognitive sciences … that intelligence is information processing” (Stiegler et al. 2021, p. 49).

One form of information processing practised by algorithms, and supported by what Pedwell calls algorithmic intuition, is a form of targeting known (perhaps misleadingly) as personalization. Essentially this means sorting us out, classifying us into increasingly precise categories for various purposes. Sophie Day and Celia Lury have described this as a mode of what Simondon called collective individuation (Lury and Day 2019; Day et al. 2023). Pedwell explains the limitations of this mode of intuition as compared to Henri Bergson’s notion, “which seeks to achieve precision through connecting with what is unique in an object”. She also discusses Lauren Berlant’s version of the pattern recognition notion of intuition as a theme within English literature. Hence Pedwell’s argument that we can see artificial intuition “as a generative, experimental, and speculative mode of algorithmic pattern recognition that entangles human and machinic propensities” (Pedwell 2023) and therefore as a (potentially inhuman) “technology of anticipation, pre-emption, and prehension” (Pedwell 2022).
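To give a concrete (and deliberately simplified) sense of what this “sorting us out” looks like in practice, the segmentation underlying personalization is often little more than clustering users on their behavioural traces and then addressing each cluster differently. The following sketch is illustrative only: the features, cluster count and data are invented, and no claim is made that any particular platform works this way.

```python
# Illustrative sketch: "personalization" as classification of users into segments.
# All feature names, numbers and thresholds are hypothetical.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Each row is one user: [minutes on site per day, news clicks, shopping clicks, late-night sessions]
users = np.array([
    [12,  1,  9, 0],
    [95, 30,  2, 7],
    [40,  5, 20, 1],
    [88, 25,  1, 9],
    [15,  0, 12, 0],
    [50,  8, 18, 2],
])

# Standardise the features, then sort users into a small number of "segments".
features = StandardScaler().fit_transform(users)
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for user_id, segment in enumerate(segments):
    print(f"user {user_id} -> segment {segment}")  # each segment gets different content, prices, adverts
```

Nothing in such a pipeline connects with what is unique in any individual; it only refines the categories, which is precisely the limitation Pedwell draws out by contrast with Bergson.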

According to Deleuze, “the only subjectivity is time, non-chronological time grasped in its foundation, and it is we who are internal to time, not the other way round” (Deleuze 1989): Bert Olivier shows how this conception of temporality both builds on and departs from Kant and Bergson (Olivier 2016). Pedwell has also noted Bergson’s interest in temporality and mobility, which “as well as the non-representational thrust of his approach, resonates with the contemporary ‘turn to affect’”, and argues that “humans and algorithms engage in radically different operations across divergent temporalities and spatialities, which nonetheless interact to produce particular worldly possibilities and outcomes” (Pedwell 2022).

This brings us on to the second article in this Special Collection, “Ashes to Ashes, Digit to Digit: The Nonhuman Temporality of Facebook’s Feed” (https://link.springer.com/article/10.1057/s41286-023-00173-8), by Talha İşsevenler, which looks at the rhythm or pulse of the attention economy, and how this has developed from the age of television to the age of social media, looking particularly at Facebook’s Feed. Until February 2022 this was known as News Feed; at the time, Amanda Silberling suggested that this renaming “could be read as an attempt to separate Facebook from its reputation as a hub of misinformation — they’ve quite literally taken the news out of the news feed” (Silberling 2022). It also implies a movement away from the specific temporality of rolling news, and a further blurring of any distinction between current affairs, entertainment, and interactions with “friends”. İşsevenler develops a sociological genealogy of data circulation and production of temporality, referencing disciplines from anthropology (Nancy Munn, Hirokazu Miyazaki) to media theory (Raymond Williams, Richard Dienst). Writing in 1994, Dienst had reflected on the changes in media and technology between the 1970s and the 1990s, which appeared to give the viewer an active role in controlling their consumption of televisual flux, but this account was already looking problematic by the early 2000s, as noted by Patricia Clough and others. Meanwhile, Miyazaki had explored the anxieties provoked by what he called “temporal incongruity” (Miyazaki 2003). İşsevenler draws on the work of more recent thinkers, including Rebecca Coleman, Wolfgang Ernst and Bernard Stiegler, to bring the analysis into the modern world of social media algorithms (İşsevenler 2023).

In May 2009, Kevin Bankston, then a senior staff attorney at the Electronic Frontier Foundation, now working for Facebook, told reporters that “Google knows more about you than your mother” (Mitchell 2009). By 2010, this narrative was being repeated by Eric Schmidt, then CEO of Google. For example, during an industry keynote speech in Berlin, he said “We know where you are, we know what you like” (Tsotsis 2010). He also made similar statements in interviews that year, including one with the Wall Street Journal.

Around that time, Siva Vaidhyanathan wrote a book called The Googlization of Everything (2011), asking (among other things) “What does the world look like through the lens of Google?” In his review of this book, entitled “It Knows”, Daniel Soar, an editor at the London Review of Books, explained how Google’s strategy involved a win–win feedback loop of information and money. “The more data it gathers, the more it knows, the better it gets at what it does. Of course, the better it gets at what it does the more money it makes, and the more money it makes the more data it gathers and the better it gets at what it does. …There is no obvious end to the process” (Soar 2011).

In 2014, having just joined Google as Director of Engineering, the futurist Ray Kurzweil told Carole Cadwalladr that “Google … will know you better than your intimate partner does. Better, perhaps, than even yourself” (Cadwalladr 2014). There is a subtle but important shift in the way these statements are framed. Bankston and Schmidt express the power of Google and the other platforms in terms of information—facts about your location, inferences about your tastes. Your mother may remember what you liked to eat when you were a child, but Amazon Fresh knows what groceries you ordered yesterday. For Kurzweil, by contrast, it’s not just about Google knowing things about you; it’s about Google knowing you yourself—perhaps not right now, but at some point in the future.

This narrative has caught the public imagination, and was actively encouraged by Google and other platforms including Facebook—at least until their business model started to be threatened by privacy legislation. For many years, most people weren’t particularly bothered by the growing monopoly power of Google. Google executives boasted about the vast wealth of data the company controlled, because this was an essential part of its pitch to the advertisers who provided most of its revenue. But more recently an increasing number of people have expressed concerns about the use and abuse of this data-wealth—not just for advertising but for various forms of governance and biopower.

Two of the papers in this issue explore this narrative from different angles. In “Better Than We Know Ourselves” (https://doi.org/10.1057/s41286-023-00174-7), Liran Razinsky looks at how the “Google Knows” myth has become received wisdom in the popular press, and challenges the way the myth appears to conflate different kinds of knowledge, from algorithmic cognition to personal introspection, while in their paper on “Subjectivity and Algorithmic Imaginaries” (https://link.springer.com/article/10.1057/s41286-023-00171-w), Alessandro Gandini, Alessandro Gerosa, Luca Giuffrè and Silvia Keeling look at how these perceptions of algorithmic knowledge are embedded in our ways of thinking about the algorithms themselves.

There are many important differences between the data and information that is collected and mobilized by Google (algorithmic cognition) and the self-knowledge that is possessed by the individual (introspection). For Razinsky, the most important difference concerns subjectivity itself. He quotes Judith Butler’s statement that our subjectivity is constituted by a capacity for reflective self-relation or reflexivity, and draws on Freud’s idea that the knowledge available to the conscious mind is incomplete. Hence Foucault’s idea that “subjectivity is the experience of displacement; paradoxically it is the feeling of not being completely one's self” (Reigeluth 2017). Razinsky also mentions the intersubjective knowledge that other people may have of a person—he cites narcissism, which can sometimes be recognized by everyone except the person themself.

According to Eran Fisher, “the performative knowledge about the self, created through big data and algorithms, is a-theoretical, almost intently anti-theoretical. It is a regime of truth that does not purport to offer a causal theory of why individuals behave in a certain way, but rather offers an algorithmic discovery of how they behave, their data patterns” (Fisher 2020). However, Razinsky dismisses as fantasy the common idea that because algorithmic knowledge works on data it is somehow completely objective and reliable (Razinsky 2023). At an industry conference in 2016, someone tried unsuccessfully to explain the problem of induction and biased reasoning to Sebastian Thrun, founder of Google X. Thrun’s reply denied the existence of this problem, and appealed to the notion of objective truth. “Statistically what the machines do pick up are patterns and sometimes we don’t like these patterns. … When we apply machine learning methods sometimes the truth we learn really surprises us, to be honest, and I think it’s good to have a dialogue about this” (Tiku 2016).

More recently however, the belief in neutrality and objectivity has been widely challenged, notably by Cathy O’Neil’s book Weapons of Math Destruction (2016). I also discuss questions of algorithmic performativity and bias in my Subjectivity review on the Sociology of Algorithms (https://link.springer.com/article/10.1057/s41286-022-00131-w) (Veryard 2022).

Gandini, Gerosa, Giuffrè and Keeling have conducted an empirical study of how these fantasies work out in practice. Using Taina Bucher’s notion of algorithmic imaginaries, which she has defined as “ways of thinking about what algorithms are, what they should be and how they function” (Bucher 2016), they have explored the beliefs and practices of internet users. They also use a notion of “othering” taken from post-colonial theory, which allows them to explore the perceived power structures embedded in the user-algorithm relationship, as well as how users position the algorithm in either anthropomorphic or mechanistic terms.

I noted earlier the notion of personalization, which can lead us to believe that the algorithm is giving us something special—“For You”. On the other hand, there is a naïve belief that these algorithms do not discriminate between us, and can be trusted to give everyone the same information or advice: the best possible price, say, or the best possible route to the airport. There is a contradiction between these two perceptions of the algorithm. Some of the participants in the Gandini et al. study clearly demonstrate awareness of the commercial context of algorithmic personalization, as well as the partial and polarized nature of the content provided. For example: “I imagine an algorithm's goal is to achieve economic results, so they have a totally different logic from offering good quality information” (Gandini et al. 2023).

The final two papers in the Special Collection look at algorithms from the perspective of workers in the platform economy. In their article “Weaving the algorithm” (https://link.springer.com/article/10.1057/s41286-023-00167-6), Diego Allen-Perkins and Montserrat Cañedo-Rodríguez explore participatory subjectivities amongst food delivery riders in Madrid, providing valuable empirical evidence to aid our understanding of algorithmic governance over the workforce (Allen-Perkins and Cañedo-Rodríguez 2023). Among other things, their findings appear to support Jamie Woodcock’s argument about the limitations of algorithmic management, and the idea that Fordist control of the workforce may be less comprehensive than is sometimes imagined (Woodcock 2020, 2021). Meanwhile, in his article on “Abstract Socialities” (https://link.springer.com/article/10.1057/s41286-023-00148-9), Selim Gokce Atici looks at workforce issues from the other side—the precarious work of data scientists in a digital advertising agency in Turkey.

In his book on Algorithmic Desire, Matthew Flisfeder reminds us that algorithms are “built and designed by human actors, actors caught in the class struggle, actors who are themselves desiring subjects” (Flisfeder 2021, p. 126). Atici’s article explores the specific conditions of labouring as experienced by digital workers in Turkey, “constituted in culturally specific ways that separate workers cognitively from the fruits of their labour, produce new ways to perceive social relations, and reproduce certain entrepreneurial and disciplinary visions” (Atici 2023). He notes how the fragmentation of the work contributes to the alienation of the workforce, citing Antoinette Rouvroy on how “the meaning-making processes of transcription or representation, institutionalisation, convention and symbolisation” are shifted from human actors to devices with what she calls “real time operationality” (Rouvroy 2013), and he shows how this leads to the “ontological erasure of human actors” and “obfuscates human subjectivity”.

While many of the articles in this collection demonstrate the importance of distinguishing algorithmic knowledge and agency from human knowledge and agency, we also need to understand how they come back together. Allen-Perkins and Cañedo-Rodríguez explicitly frame this question in terms of a ‘recursive loop’ between the calculations of the algorithm and the riders' own self-reflection, arguing that this can yield “flexible patterns of thought and action”, and looking at how algorithmic mediation enters into the participatory subjectivity of the delivery riders. This brings us back to the overall topic of recursivity and the entanglement between human and technical systems.
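As a purely illustrative toy (not a model of the Madrid riders, and with every quantity and update rule invented for the purpose), such a recursive loop can be pictured as two functions feeding each other: the platform re-scores a rider on the basis of observed behaviour, and the rider adjusts behaviour on the basis of the score they perceive.

```python
# Toy illustration of a recursive loop between algorithmic scoring and worker self-adjustment.
# All quantities and update rules are invented; this is not any platform's actual algorithm.

def platform_score(acceptance_rate: float) -> float:
    """Hypothetical platform score: rewards riders who accept more orders."""
    return 0.2 + 0.8 * acceptance_rate

def rider_adjustment(score: float, acceptance_rate: float) -> float:
    """Hypothetical rider response: accept more orders when the perceived score is low."""
    target = 1.0 - 0.5 * score                     # a lower score prompts a higher target
    return acceptance_rate + 0.3 * (target - acceptance_rate)

acceptance_rate = 0.5
for step in range(10):
    score = platform_score(acceptance_rate)
    acceptance_rate = rider_adjustment(score, acceptance_rate)
    print(f"step {step}: score={score:.3f}, acceptance={acceptance_rate:.3f}")
```

Neither rule determines the outcome on its own; each settles only in relation to the other, which is one way of picturing how the riders’ “flexible patterns of thought and action” and the algorithm’s calculations are mutually constituted.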

This entanglement holds not just for individual humans but for humanity as a whole. In his 2021 documentary Can’t Get You Out Of My Head, Adam Curtis describes as “one of the most powerful mythologies of our age” the idea that “the world is too complicated for us as human beings to understand, but nothing is too complicated for the machines and the data, for they can see the hidden reality under the surface” (Curtis 2021; Utterson 2023). Curtis looks at recent experiments in algorithmic governance in China and elsewhere, challenges the idea that our subjectivity can nowadays be accounted for simply in terms of algorithmic nudging and manipulation, and ends with a quote from David Graeber: “the ultimate, hidden truth of the world is that it is something we make, and could just as easily make differently” (Graeber 2009, p. 514).

Algorithms and the Everyday is a broad and rapidly changing topic. There are many more angles that could be addressed, and new ideas and experiences emerging that will require critical attention. The Editors of Subjectivity have therefore agreed to keep this collection open for further submissions, and we look forward to extending it in future. Please let us have your thoughts about other ways of looking at this topic.