Actually existing urban AI (like other forms of data-centric automation) is, for the most part, quite mundane. Everyday applications of AI rest on a deeper layer of datafication—not only of buildings and lampposts, ID cards and transit systems, but also, and more fundamentally, of the daily practices, relationships, and movements of the people who dwell in and move around our cities and towns. Current and future developments in urban AI, then, both depend on and shape what my co-authors Kath Albury, Anthony McCosker and Rowan Wilken and I call “everyday data cultures” in our 2022 Polity Press book of the same name. In what follows, I draw heavily on the ideas from that book.
Over the past decade, the field of critical data studies has emerged at the intersections of digital sociology, cultural studies, and internet studies to deal with the politics of datafication, as well as the forms of inequality and injustice that can result from data-driven automated decision-making and AI systems. Within computer science, these critiques have been mirrored by efforts to address data bias and to improve algorithmic fairness, transparency, and explainability in AI.
But more recently, the public conversation has taken a fairly sharp turn towards what we might call ‘Big Critique’. By this, we mean writing marked by a sense of urgency expressed in polemical terms, often promoting apparently novel conceptual frameworks that address large-scale, whole-of-society (or whole-of-planet) concerns and that are set up principally against the unprecedented power of the world’s biggest tech companies (see, for example, Crawford 2021).
One risk with Big Critique is that it can end up amplifying popular myths about the power of technology. The most obvious example of this tendency is Shoshana Zuboff’s 2019 epic work The Age of Surveillance Capitalism. In the book, Zuboff sounds the alarm about a new, aberrant form of capitalism that takes the data traces (or, as Zuboff would have it, “digital exhaust”) of our everyday lives and converts them into predictive analytics and, increasingly, manipulative behavioural targeting. In her account, the tech companies appear all-powerful; meanwhile, we ordinary citizens or consumers are cast as their unconscious subjects—and that, of course, is exactly how the agents of Zuboff’s “surveillance capitalism” want to see us.
While Zuboff’s book may be a polemical outlier, the narrative tropes it relies on have a long history, and they show up in even the more apparently level-headed critiques of AI. Whether framed positively or negatively, the notion of revolutionary moments of technological change (from the railroad to electricity and the internet) is fundamental to the very idea of colonial nationhood, in the US and elsewhere (including in Australia). Indeed, the AI moment is the latest iteration of what David Nye (1996) called the “technological sublime”: a cultural framework in which AI-based technologies are destined to progress to awe-inspiring feats of innovation, floating free above race, gender and sexuality. Technoculture’s products are represented as quasi-magical, frictionless devices; in the US context, its white male CEOs, from Steve Jobs to Elon Musk, are like gods—phallic spacecraft and all.
Big Critique heroically fights rhetorical fire with fire, revealing the dark toxicity at the heart of Big Tech. But in doing so, it plays into the quasi-religious tropes of the technological sublime, positioning technology at the centre of moral battles between good and evil. And so Big Critique’s dominant framework of hype versus counter-hype primarily serves the interests of tech companies, by connecting to a sense of drama and urgency around an always-impending future tech revolution, and by acting as if AI can in fact do what it claims to. If it does not escape this trap, even the most careful, scholarly work that aims to respond critically to datafication can end up centring the large technology companies and the State, leaving ordinary people out of the picture.
Meanwhile, individuals, communities and organisations of all kinds are going about their lives and work amid constant technological change, grappling with, anxious about, or simply uninterested in the possibilities, risks, and challenges of data and automation. Bearing in mind the trap represented by the technological sublime, I suggest we take heed of Sonia Livingstone’s (2019) characterisation of the present moment as a “heady climate”, one in which “cautious calls to gather evidence about people’s lives are easily missed in the urgent rush to describe our coming predicament.”
In Everyday Data Cultures, my co-authors and I aim to contribute to this effort. Drawing on past and ongoing empirical and participatory work with communities and households as they make do with new technologies in their lives, we learn from the practical solutions cobbled together by suburban families; the ways that queer intimacies provide joyful and caring models of selfhood and relationships in digital spaces; and the ways that abusers and their toxic subcultures can exploit the affordances of data-intensive machines in harmful ways.
In the interaction between everyday life’s mundane and meaning-making practices and the data operations of various kinds of AI, there are, as Raymond Williams might have put it, ‘resources of hope’ for more inclusive, creative, and ethical AI futures. These resources can be put to good use by social enterprises, community organisations, artists—and even academics—in practical initiatives that are quietly grounded in everyday experiences, practices, and needs, beyond the heroics of Big Critique.
References
Crawford K (2021) Atlas of AI. Yale University Press, New Haven
Livingstone S (2019) Audiences in an age of datafication: critical questions for media research. Television & New Media 20(2):170–183. https://doi.org/10.1177/1527476418811118
Nye DE (1996) American Technological Sublime. MIT Press, Cambridge, MA
Curmudgeon Corner
Curmudgeon Corner is a short opinionated column on trends in technology, arts, science and society, commenting on issues of concern to the research community and wider society. Whilst the drive for super-human intelligence promotes potential benefits to wider society, it also raises deep concerns of existential risk, thereby highlighting the need for an ongoing conversation between technology and society. At the core of Curmudgeon concern is the question: What is it to be human in the age of the AI machine? -Editor.