Algorithms have become ubiquitous in everyday life to the extent that it is almost impossible to operate without them. In the past few years, several articles have been published in Subjectivity offering a critical evaluation of recent developments in artificial intelligence, machine learning and algorithmic governmentality, and the implications of these for culture and society, including a Special Issue on Digital Subjects (https://link.springer.com/collections/fbiaadacic) in March 2019.
In this issue we explore the entanglement of algorithms in the lifeworld. How do algorithms reflect and represent society and culture? Does the literature on subjectivity help us understand what cultural assumptions may be inscribed in algorithms, and how they got there? What kind of social agency is represented by algorithms? How do people make sense of their engagement with algorithms, what do they imagine the algorithms to be/to be doing? And, conversely, how do algorithms make sense of, form and produce them? What can be said about the broader psychosocial implications of algorithms?
The popular notion of artificial intelligence is that computers perform clever tasks. We typically overlook the human effort and ingenuity that has gone into this performance—thus when a computer beats a human grandmaster at chess, it doesn’t stand modestly on the platform thanking the rest of the team. Anticipating human anxiety about competition from computers, Alan Turing imagined a test that would determine whether an interaction was with a human or a computer: he called it the Imitation Game; we now call it the Turing Test. His first example was to ask a computer to write poetry—specifically a sonnet on the subject of the Forth Bridge. And his idea of a plausible answer for the computer was to say: “Count me out on this one. I never could write poetry” (Turing 1950).
At the time of writing this, a chatbot called ChatGPT has attracted a lot of attention as an example of artificial intelligence, and perhaps many people have tested ChatGPT with exactly the same question that Turing imagined. When Jessica Riskin tried it, she was not impressed by its efforts. She found Turing’s imaginary machine’s answer (Turing imitating a machine imitating a human) infinitely more persuasive (as indicator of intelligence) than ChatGPT’s. “Turing’s imagined intelligent machine gives off an unmistakable aura of individual personhood, even of charm” (Riskin 2023).
In an earlier article, Riskin described a mechanical automaton that attracted large admiring crowds in eighteenth century Paris. This was a generative pretrained transformer in the shape of a duck, which appeared to convert pellets of food into pellets of excrement. The inventor “is careful to say that he wants to show, not just a machine, but a process. But he is equally careful to say that this process is only a partial imitation” (Riskin 2003).
But let’s turn this thinking around. What does everyday human intelligence look like nowadays, when it seems to be impossible to perform any cognitive task without the aid of a computer or smartphone connected to the internet, without some form of algorithmic mediation? A number of writers on algorithms have explored the entanglement between humans and technical systems, often invoking the concept of recursivity. This concept has been variously defined in terms of co-production (Hayles 1999), second-order cybernetics and autopoiesis (Clarke 2017), and “being outside of itself (ekstasis), which recursively extends to the indefinite” (Hui 2021). Louise Amoore argues that, “in every singular action of an apparently autonomous system … resides a multiplicity of human and algorithmic judgements, assumptions, thresholds, and probabilities” (Amoore 2020).
The articles in this special collection explore this entanglement from several different angles. In the first article “Intuition as a Trained Thing” (https://link.springer.com/article/10.1057/s41286-023-00170-x), Carolyn Pedwell traces the place of intuition in reasoning, drawing on a wide range of disciplines from psychology and decision theory to the philosophy of mathematics, and shows how this is incorporated into algorithmic reasoning (Pedwell 2023).
There are conflicting notions of intuition within mathematics (Poincaré 1905). In her article, Pedwell discusses L.E.J. Brouwer, who extended Poincaré’s critique of classical mathematical logic and developed a much more austere constructivist or “intuitionistic” logic, limiting mathematical proof to those concepts and arguments that could be constructed mentally. Among other things, this means abandoning the law of excluded middle (Dalen 2012).
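To make the contrast concrete (a standard textbook gloss, not drawn from Pedwell’s article): classical logic licenses assertions that Brouwer’s intuitionistic logic withholds, because a constructive proof of a disjunction must actually construct one of its disjuncts.

```latex
% Schemas valid classically but rejected in general by intuitionistic logic:
\[ P \lor \lnot P \qquad \text{(law of excluded middle)} \]
\[ \lnot\lnot P \to P \qquad \text{(double-negation elimination)} \]
% The converse direction survives constructively:
\[ P \to \lnot\lnot P \]
```

In practice this means that a classical proof by contradiction of an existence claim need not yield the witness that an intuitionist demands.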
Another entirely separate line of attack concerns the use of intuition in generating new ideas, or in solving problems. The Hungarian mathematician George Pólya is known for promoting the teaching of heuristics as systematic methods for mathematical discovery and invention. “Let us teach proving by all means, but let us also teach guessing” (Pólya 1963). Pedwell quotes R.L. Wilder’s version of this idea: “Intuition, as used by the modern mathematician, means an accumulation of attitudes (including beliefs and opinions) derived from experience, both individual and cultural” (Wilder 1967).
A third thread concerns the possibility of using intuition to supplement rigorous formal proof – and indeed the necessity of this following Kurt Gödel’s work on recursivity and incompleteness. A version of this idea can be found in Alan Turing’s PhD thesis, where he says “In pre-Gödel times it was thought by some … that all the intuitive judgments of mathematics could be replaced by a finite number of these rules. The necessity for intuition would then be entirely eliminated” (Turing 1939). Responding to this line of thought, the philosopher J.R. Lucas argued that Gödel’s incompleteness theorem proved that minds cannot be explained as machines. “We can (or shall be able to one day) build machines capable of reproducing bits of mind-like behaviour, and indeed of outdoing the performances of human minds: but however good the machine is … it always has this one weakness. … The Gödelian formula is the Achilles heel of the cybernetical machine” (Lucas 1961). This argument also supports Pedwell’s observation: “there is always a remainder which resists translation into computational form”.[1]
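For readers who want the formal statement behind Lucas’s “Gödelian formula” (the standard first incompleteness theorem, paraphrased here rather than quoted from any of the articles under discussion):

```latex
% For any consistent, effectively axiomatized theory T extending
% elementary arithmetic, there is a sentence G_T such that
\[ T \vdash G_T \leftrightarrow \lnot \mathrm{Prov}_T(\ulcorner G_T \urcorner) \]
% and T proves neither G_T nor (given slightly stronger assumptions,
% e.g. omega-consistency) its negation:
\[ T \nvdash G_T \qquad T \nvdash \lnot G_T \]
```

Lucas reads this as showing that a human can recognize the truth of the Gödel sentence where the corresponding machine cannot prove it; as the endnote indicates, this reading of the theorem is contested.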
Pedwell also notes the significance of intuition within business administration, drawing on a key paper by Herbert Simon on the role of intuition and emotion in management decision-making. Simon’s view of intuition as based on pattern recognition fits with his notions of intelligence as largely concerned with decision-making. As Evgeny Morozov notes, “many critics have pointed out that intelligence is not just about pattern-matching. Equally important is the ability to draw generalisations” (Morozov 2023). For example, Bernard Stiegler and his collaborators invoke Poincaré in their criticism of Simon’s influence over the whole field of computing and artificial intelligence, especially “the dominant view in the cognitive sciences … that intelligence is information processing” (Stiegler et al. 2021, p. 49).
One form of information processing practised by algorithms, and supported by what Pedwell calls algorithmic intuition, is a form of targeting known (perhaps misleadingly) as personalization. Essentially this means sorting us out, classifying us into increasingly precise categories for various purposes. Sophie Day and Celia Lury have described this as a mode of what Simondon called collective individuation (Lury and Day 2019; Day et al. 2023). Pedwell explains the limitations of this mode of intuition as compared to Henri Bergson’s notion, “which seeks to achieve precision through connecting with what is unique in an object”. She also discusses Lauren Berlant’s version of the pattern recognition notion of intuition as a theme within English literature. Hence Pedwell’s argument that we can see artificial intuition “as a generative, experimental, and speculative mode of algorithmic pattern recognition that entangles human and machinic propensities” (Pedwell 2023) and therefore as a (potentially inhuman) “technology of anticipation, pre-emption, and prehension” (Pedwell 2022).
According to Deleuze, “the only subjectivity is time, non-chronological time grasped in its foundation, and it is we who are internal to time, not the other way round” (Deleuze 1989): Bert Olivier shows how this conception of temporality both builds on and departs from Kant and Bergson (Olivier 2016). Pedwell has also noted Bergson’s interest in temporality and mobility, which “as well as the non-representational thrust of his approach, resonates with the contemporary ‘turn to affect’”, and argues that “humans and algorithms engage in radically different operations across divergent temporalities and spatialities, which nonetheless interact to produce particular worldly possibilities and outcomes” (Pedwell 2022).

This brings us on to the second article in this Special Collection, “Ashes to Ashes, Digit to Digit: The Nonhuman Temporality of Facebook’s Feed” (https://link.springer.com/article/10.1057/s41286-023-00173-8), by Talha İşsevenler, which looks at the rhythm or pulse of the attention economy, and how this has developed from the age of television to the age of social media, looking particularly at Facebook’s Feed. Until February 2022 this was known as News Feed; at the time, Amanda Silberling suggested that this renaming “could be read as an attempt to separate Facebook from its reputation as a hub of misinformation — they’ve quite literally taken the news out of the news feed” (Silberling 2022). It also implies a movement away from the specific temporality of rolling news, and a further blurring of any distinction between current affairs, entertainment, and interactions with “friends”. İşsevenler develops a sociological genealogy of data circulation and production of temporality, referencing disciplines from anthropology (Nancy Munn, Hirokazu Miyazaki) to media theory (Raymond Williams, Richard Dienst).
Writing in 1994, Dienst had reflected on the changes in media and technology between the 1970s and the 1990s, which appeared to give the viewer an active role in controlling their consumption of televisual flux; but this account was already looking problematic by the early 2000s, as noted by Patricia Clough and others. Meanwhile, Miyazaki had explored the anxieties provoked by what he called “temporal incongruity” (Miyazaki 2003). İşsevenler draws on the work of more recent thinkers, including Rebecca Coleman, Wolfgang Ernst and Bernard Stiegler, to bring the analysis into the modern world of social media algorithms (İşsevenler 2023).
In May 2009, Kevin Bankston, then a senior staff attorney at the Electronic Frontier Foundation, now working for Facebook, told reporters that “Google knows more about you than your mother” (Mitchell 2009). By 2010, this narrative was being repeated by Eric Schmidt, then CEO of Google. For example, during an industry keynote speech in Berlin, he said “We know where you are, we know what you like” (Tsotsis 2010). He also made similar statements in interviews that year, including one with the Wall Street Journal.
Around that time, Siva Vaidhyanathan wrote a book called The Googlization of Everything (2011), asking (among other things) “What does the world look like through the lens of Google?” In his review of this book, entitled “It Knows”, the editor of the London Review of Books explained how Google’s strategy involved a win–win feedback loop of information and money. “The more data it gathers, the more it knows, the better it gets at what it does. Of course, the better it gets at what it does the more money it makes, and the more money it makes the more data it gathers and the better it gets at what it does. …There is no obvious end to the process” (Soar 2011).
In 2014, having just joined Google as Director of Engineering, the futurist Ray Kurzweil told Carol Cadwalladr that “Google … will know you better than your intimate partner does. Better, perhaps, than even yourself” (Cadwalladr 2014). There is a subtle but important shift in the way these statements are framed. Bankston and Schmidt express the power of Google and the other platforms in terms of information—facts about your location, inferences about your tastes. Your mother may remember what you liked to eat when you were a child, but Amazon Fresh knows what groceries you ordered yesterday. Whereas for Kurzweil, it’s not just Google knowing things about you, it’s about Google knowing you yourself—perhaps not right now, but at some point in the future.
This narrative has caught the public imagination, and was actively encouraged by Google and other platforms including Facebook—at least until their business model started to be threatened by privacy legislation. For many years, most people weren’t particularly bothered by the growing monopoly power of Google. Its executives boasted about the vast wealth of data the company controlled, because this was an essential part of its pitch to the advertisers that provided most of its revenue. But more recently an increasing number of people have expressed concerns about the use and abuse of this data-wealth—not just for advertising but for various forms of governance and biopower.
Two of the papers in this issue explore this narrative from different angles. In “Better Than They Know Themselves?” (https://doi.org/10.1057/s41286-023-00174-7), Liran Razinsky looks at how the “Google Knows” myth has become received wisdom in the popular press, and challenges the way the myth appears to conflate different kinds of knowledge, from algorithmic cognition to personal introspection, while in their paper on “Subjectivity and Algorithmic Imaginaries” (https://link.springer.com/article/10.1057/s41286-023-00171-w), Alessandro Gandini, Alessandro Gerosa, Luca Giuffrè and Silvia Keeling look at how these perceptions of algorithmic knowledge are embedded in our ways of thinking about the algorithms themselves.
There are many important differences between the data and information that is collected and mobilized by Google (algorithmic cognition) and the self-knowledge that is possessed by the individual (introspection). For Razinsky, the most important difference concerns subjectivity itself. He quotes Judith Butler’s statement that our subjectivity is constituted by a capacity for reflective self-relation or reflexivity, and draws on Freud’s idea that the knowledge available to the conscious mind is incomplete. Hence Foucault’s idea that “subjectivity is the experience of displacement; paradoxically it is the feeling of not being completely one's self” (Reigeluth 2017). Razinsky also mentions the intersubjective knowledge that other people may have of a person—he cites narcissism, which can sometimes be recognized by everyone except the person themself.
According to Eran Fisher, “the performative knowledge about the self, created through big data and algorithms, is a-theoretical, almost intently anti-theoretical. It is a regime of truth that does not purport to offer a causal theory of why individuals behave in a certain way, but rather offers an algorithmic discovery of how they behave, their data patterns” (Fisher 2020). However, Razinsky dismisses as fantasy the common idea that because algorithmic knowledge works on data it is somehow completely objective and reliable (Razinsky 2023). At an industry conference in 2016, someone tried unsuccessfully to explain the problem of induction and biased reasoning to Sebastian Thrun, founder of Google X. Thrun’s reply denied the existence of this problem, and appealed to the notion of objective truth. “Statistically what the machines do pick up are patterns and sometimes we don’t like these patterns. … When we apply machine learning methods sometimes the truth we learn really surprises us, to be honest, and I think it’s good to have a dialogue about this” (Tiku 2016).
More recently however, the belief in neutrality and objectivity has been widely challenged, notably by Cathy O’Neil’s book Weapons of Math Destruction (2016). I also discuss questions of algorithmic performativity and bias in my Subjectivity review on the Sociology of Algorithms (https://link.springer.com/article/10.1057/s41286-022-00131-w) (Veryard 2022).
Gandini, Gerosa, Giuffrè and Keeling have conducted an empirical study of how these fantasies work out in practice. Using Taina Bucher’s notion of algorithmic imaginaries, which she has defined as “ways of thinking about what algorithms are, what they should be and how they function” (Bucher 2016), they have explored the beliefs and practices of internet users. They also use a notion of “othering” taken from post-colonial theory, which allows them to explore the perceived power structures embedded in the user-algorithm relationship, as well as how users position the algorithm in either anthropomorphic or mechanistic terms.[2]
I noted earlier the notion of personalization, which can lead us to believe that the algorithm is giving us something special—“For You”. On the other hand, there is a naïve belief that these algorithms do not discriminate between us, and can be trusted to give everyone the same information or advice: the best possible price, for example, or the best possible route to the airport. There is a contradiction between these two perceptions of the algorithm. Some of the participants in the Gandini et al. study clearly demonstrate awareness of the commercial context of algorithmic personalization, as well as the partial and polarized nature of the content provided. For example: “I imagine an algorithm's goal is to achieve economic results, so they have a totally different logic from offering good quality information” (Gandini et al. 2023).
The final two papers in the Special Collection look at algorithms from the perspective of workers in the platform economy. In their article “Weaving the algorithm” (https://link.springer.com/article/10.1057/s41286-023-00167-6), Diego Allen-Perkins and Montserrat Cañedo-Rodríguez explore participatory subjectivities amongst food delivery riders in Madrid, providing valuable empirical evidence to aid our understanding of algorithmic governance over the workforce (Allen-Perkins and Cañedo-Rodríguez 2023). Among other things, their findings appear to support Jamie Woodcock’s argument about the limitations of algorithmic management, and the idea that Fordist control of the workforce may be less comprehensive than is sometimes imagined (Woodcock 2020, 2021). Meanwhile, in his article on “Abstract Socialities” (https://link.springer.com/article/10.1057/s41286-023-00148-9), Selim Gokce Atici looks at workforce issues from the other side—the precarious work of data scientists in a digital advertising agency in Turkey.
In his book on Algorithmic Desire, Matthew Flisfeder reminds us that algorithms are “built and designed by human actors, actors caught in the class struggle, actors who are themselves desiring subjects” (Flisfeder 2021, p. 126). Atici’s article explores the specific conditions of labouring as experienced by digital workers in Turkey, “constituted in culturally specific ways that separate workers cognitively from the fruits of their labour, produce new ways to perceive social relations, and reproduce certain entrepreneurial and disciplinary visions” (Atici 2023). He notes how the fragmentation of the work contributes to the alienation of the workforce, citing Antoinette Rouvroy on how “the meaning-making processes of transcription or representation, institutionalisation, convention and symbolisation” are shifted from human actors to devices with what she calls “real time operationality” (Rouvroy 2013), and he shows how this leads to the “ontological erasure of human actors” and “obfuscates human subjectivity”.
While many of the articles in this collection demonstrate the importance of distinguishing algorithmic knowledge and agency from human knowledge and agency, we also need to understand how they come back together. Allen-Perkins and Cañedo-Rodríguez explicitly frame this question in terms of a ‘recursive loop’ between the calculations of the algorithm and the riders' own self-reflection, arguing that this can yield “flexible patterns of thought and action”, and looking at how algorithmic mediation enters into the participatory subjectivity of the delivery riders. This brings us back to the overall topic of recursivity and the entanglement between human and technical systems.
This entanglement extends not just to individual humans but to humanity as a whole. In his 2021 documentary Can’t Get You Out Of My Head, Adam Curtis describes as “one of the most powerful mythologies of our age” the idea that “the world is too complicated for us as human beings to understand, but nothing is too complicated for the machines and the data, for they can see the hidden reality under the surface” (Curtis 2021; Utterson 2023). Curtis looks at recent experiments in algorithmic governance in China and elsewhere, he challenges the idea that our subjectivity can nowadays be accounted for simply in terms of algorithmic nudging and manipulation, and he ends with a quote from David Graeber: “the ultimate, hidden truth of the world is that it is something we make, and could just as easily make differently” (Graeber 2009, p. 514).
Algorithms and the Everyday is a broad and rapidly changing topic. There are many more angles that could be addressed, and new ideas and experiences emerging that will require critical attention. The Editors of Subjectivity have therefore agreed to keep this collection open for further submissions, and we look forward to extending it in future. Please let us have your thoughts about other ways of looking at this topic.
Notes
1. Other interpretations of Gödel’s incompleteness theorem are available. For example, Luciana Parisi relies on some controversial work by Gregory Chaitin to argue that interactive algorithms can circumvent the algorithmic constraints of the Turing Machine (Parisi 2015).
2. They briefly acknowledge the use of the word “othering” in psychoanalysis, with a reference to a Lacanian paper by Bandinelli and Bandinelli (2021), but this is not explored further.
References
Allen-Perkins, D., and M. Cañedo-Rodríguez. 2023. Weaving the Algorithm: Participatory Subjectivities Amongst Food Delivery Riders. Subjectivity. https://doi.org/10.1057/s41286-023-00167-6.
Amoore, L. 2020. Cloud Ethics: Algorithms and the Attributes of Ourselves and Others. Durham: Duke University Press.
Atici, S.G. 2023. Abstract Socialities: Digital Advertising Work and Behavioural Data Modelling in Istanbul. Subjectivity. https://doi.org/10.1057/s41286-023-00148-9.
Bandinelli, C., and A. Bandinelli. 2021. What Does the App Want? A Psychoanalytic Interpretation of Dating Apps’ Libidinal Economy. Psychoanalysis, Culture and Society 26 (2): 181–198. https://doi.org/10.1057/s41282-021-00217-5.
Bucher, T. 2016. The Algorithmic Imaginary: Exploring the Ordinary Affects of Facebook Algorithms. Information, Communication and Society 20 (1): 30–44. https://doi.org/10.1080/1369118X.2016.1154086.
Cadwalladr, C. 2014, February 22. Are the Robots About to Rise? Google's New Director of Engineering Thinks So…. The Observer. https://www.theguardian.com/technology/2014/feb/22/robots-google-ray-kurzweil-terminator-singularity-artificial-intelligence.
Clarke, B. 2017. Rethinking Gaia: Stengers, Latour, Margulis. Theory Culture and Society 34 (4): 3–26. https://doi.org/10.1177/0263276416686844.
Curtis, A. (Director). 2021. Can't Get You Out of My Head (Motion Picture).
Dalen, D. van. 2012. Poincaré and Brouwer on Intuition and Logic. Nieuw Archief voor Wiskunde (NAW): 191–195.
Day, S., C. Lury, and H. Ward. 2023. Personalization: A New Political Arithmetic? Distinktion: Journal of Social Theory. https://doi.org/10.1080/1600910X.2022.2098352.
Deleuze, G. 1989. Cinema 2: The Time-Image. Trans. H. Tomlinson and R. Galeta. London: Athlone Press.
Fisher, E. 2020. Can Algorithmic Knowledge About the Self Be Critical? In The Digital Age and Its Discontents, ed. M. Stocchetti. Helsinki: Helsinki University Press. https://doi.org/10.33134/HUP-4-6.
Flisfeder, M. 2021. Algorithmic Desire: Towards a New Structuralist Theory of Social Media. Evanston: Northwestern University Press.
Gandini, A., A. Gerosa, L. Giuffrè, and S. Keeling. 2023. Subjectivity and Algorithmic Imaginaries: The Algorithmic Other. Subjectivity. https://doi.org/10.1057/s41286-023-00171-w.
Graeber, D. 2009. Direct Action: An Ethnography. Oakland: AK Press.
Hayles, N.K. 1999. The Illusion of Autonomy and the Fact of Recursivity: Virtual Ecologies, Entertainment, and “Infinite Jest.” New Literary History 30 (3): 675–697.
Hui, Y. 2021. Problems of Temporality in the Digital Epoch. In Media Infrastructures and the Politics of Digital Time, ed. A. Volmar and K. Stine, 77–87. Amsterdam: Amsterdam University Press.
İşsevenler, T. 2023. Ashes to Ashes, Digit to Digit: The Nonhuman Temporality of Facebook’s Feed. Subjectivity. https://doi.org/10.1057/s41286-023-00173-8.
Lucas, J. 1961. Minds, Machines and Gödel. Philosophy 36 (137): 112–127.
Lury, C., and S. Day. 2019. Algorithmic Personalization as a Mode of Individuation. Theory Culture and Society 36 (2): 17–37. https://doi.org/10.1177/0263276418818888.
Mitchell, R.L. 2009, May 11. What Google Knows About You. Computerworld. https://www.computerworld.com/article/2772604/what-google-knows-about-you.html.
Miyazaki, H. 2003. The Temporalities of the Market. American Anthropologist 105 (2): 255–265.
Morozov, E. 2023, March 30. The Problem with Artificial Intelligence? It’s Neither Artificial Nor Intelligent. Guardian. https://www.theguardian.com/commentisfree/2023/mar/30/artificial-intelligence-chatgpt-human-mind.
Olivier, B. 2016. Deleuze’s “Crystals of Time”, Human Subjectivity and Social History. Phronimon. https://doi.org/10.17159/2413-3086/2016/160.
O’Neil, C. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.
Parisi, L. 2015. Instrumental Reason, Algorithmic Capitalism and the Incomputable. In Alleys of Your Mind: Augmented Intelligence and Its Traumas, ed. M. Pasquinelli, 125–137. Lüneburg: Meson Press.
Pedwell, C. 2022. Speculative Machines and Us: More-than-Human Intuition and the Algorithmic Condition. Cultural Studies. https://doi.org/10.1080/09502386.2022.2142805.
Pedwell, C. 2023. Intuition as a “trained thing”: Sensing, Thinking, and Speculating in Computational Cultures. Subjectivity. https://doi.org/10.1057/s41286-023-00170-x.
Poincaré, H. 1905. The Value of Science (La Valeur de la Science). Paris: Flammarion.
Pólya, G. 1963. On Learning, Teaching, and Learning Teaching. The American Mathematical Monthly 70 (6): 605–619.
Razinsky, L. 2023. Better than they Know Themselves? Algorithms and Subjectivity. Subjectivity. https://doi.org/10.1057/s41286-023-00174-7.
Reigeluth, T. 2017. Recommender Systems as Techniques of the Self? Le Foucaldien. https://doi.org/10.16995/lefou.29.
Riskin, J. 2003. The Defecating Duck, or, the Ambiguous Origins of Artificial Life. Stanford Digital Repository. https://doi.org/10.25740/zb803xz9154.
Riskin, J. 2023, June 25. A Sort of Buzzing Inside My Head. New York Review of Books. https://www.nybooks.com/online/2023/06/25/a-sort-of-buzzing-inside-my-head/.
Rouvroy, A. 2013. The End(s) of Critique: Data-Behaviourism vs. Due-Process. In Privacy, Due Process and the Computational Turn. Philosophers of Law Meet Philosophers of Technology, ed. M. Hildebrandt and K. de Vries. New York: Routledge.
Silberling, A. 2022, February 15. Facebook Renamed Its ‘News Feed’ to Just ‘Feed’. TechCrunch. https://techcrunch.com/2022/02/15/facebook-renamed-its-news-feed-to-just-feed/.
Soar, D. 2011, October 6. It Knows. London Review of Books 33 (19). https://www.lrb.co.uk/the-paper/v33/n19/daniel-soar/it-knows.
Stiegler, B., et al. 2021. Bifurcate: There is No Alternative. Trans. D. Ross. London: Open Humanities Press.
Tiku, N. 2016, October 24. At Vanity Fair’s Festival, Tech Can’t Stop Talking About Trump. BuzzFeed News. https://www.buzzfeednews.com/article/nitashatiku/vanity-fair-silicon-valley-donald-trump.
Tsotsis, A. 2010, September 7. Eric Schmidt: “We Know Where You Are, We Know What You Like”. TechCrunch. https://techcrunch.com/2010/09/07/eric-schmidt-ifa/.
Turing, A. 1939. Systems of Logic Based on Ordinals. Proceedings of the London Mathematical Society s2-45: 161–228.
Turing, A. 1950. Computing Machinery and Intelligence. Mind 59 (236): 433–460.
Utterson, A. 2023. Software, Self, Society: The Computer Histories of Adam Curtis. Quarterly Review of Film and Video. https://doi.org/10.1080/10509208.2023.2247312.
Vaidhyanathan, S. 2011. The Googlization of Everything (and Why We Should Worry). Berkeley: University of California Press. https://doi.org/10.1525/9780520948693.
Veryard, R. 2022. On the Sociology of Algorithms. Subjectivity 15: 88–91. https://doi.org/10.1057/s41286-022-00131-w.
Wilder, R. 1967. The Role of Intuition. Science 156 (3775): 605–610.
Woodcock, J. 2020. The Algorithmic Panopticon at Deliveroo: Measurement, Precarity, and the Illusion of Control. Ephemera: Theory and Politics in Organization 20 (3): 67–95.
Woodcock, J. 2021. The Limits of Algorithmic Management: On Platforms, Data, and Workers’ Struggle. South Atlantic Quarterly 120 (4): 703–713. https://doi.org/10.1215/00382876-9443266.
Veryard, R. As we may think now. Subjectivity 30, 339–347 (2023). https://doi.org/10.1057/s41286-023-00175-6