1 Introduction

Artificial intelligence (AI) has fiscal, social, and cultural impacts whose technological paradigms can be matched with social paradigms to define heuristic forces that have shaped the development of all forms of automation deployed since industrialism transformed the established cultural order to reflect its priorities and demands for expanding profit (Dosi 1982, pp. 151–153). Theorising links between AI and Enlightenment ideologies of agency establishes a lineage for automation that originates with the reification of philosopher Immanuel Kant’s distinctions between types of judgement (agency) that became an instrumentality justifying mechanisation and its social impacts by affirming capitalism’s progressive assertion of managerial control (reflective agency) over labour (determinative agency) in the nineteenth century. This ideology is evident beginning with the transition from hand labour to mechanisation, is amplified by the assembly line, and continues into the twenty-first century with automation and AI (“machine learning”): it is an “ideology of automation” which claims that labour and machinery are interchangeable by casting the separation of managerial control from productive labour as a product of innate differences of class, ability, and intelligence. Differences in social status—labour’s inferiority and management’s superiority—tautologically justify both roles as reflections (and proofs) of morality demonstrated by different degrees of economic success, since the ‘disciplined pursuit of individual self-interest was conceived as a moral imperative’ and prosperity was ‘dependent on virtue’ (Lears 1981, p. 20). This argument validates the accumulation of surplus value [profit] while devaluing labour, thus permitting the dehumanisation of factory workers and the gross fiscal disparities between social classes.

The contemporary automation of intelligent, immaterial tasks discloses the fallacy of factory workers retraining into the professional class (Braverman 1974; Betancourt 2016), since shifting assembly line labour to a different setting (office work) resulted in labour performing tasks which were no less rote (Braverman 1974; Ford 2009; Susskind 2020); AI thus links the displaced labour in the industrial revolution to the fourth, digital revolution. Adopting an interdisciplinary approach to the theoretical analysis of this continuity illuminates the connections between each stage of industrialisation (Polanyi 2001), showing that the collective dreams, fantasies, and nightmares about historical challenges to established and stable beliefs, including questions of authenticity, ‘the human’, and ‘the real’ (Boddy 2004; Baym 2010) presage current anxieties about AI.

There is no break in the codependencies of labour and value within capitalism (Marcuse 1955; Hobsbawm 2011). The social and fiscal determinants shaping AI enable a fantasy in which the mechanism of production (software in general and AI specifically) becomes both a commodity and a value generator because, instead of production, digital automation employs data collection to generate surplus value (Elmer 2004; Zuboff 2019) for companies such as Alphabet or Meta (Prodnik 2017), making surveillance inextricable from digital capitalism (Betancourt 2016). Claims that AI will usher in a golden age of prosperity and end scarcity repeat this historical fallacy. However, these developments that replace human labour are epiphenomena because the tasks completed by machines are not in themselves a source of value; even fully autonomous digital systems rely on human selection (agency) to generate value—the essential need/desire required by exchange (Marx 1990). Digital capitalism has responded to this inescapable horizon for surplus value by transforming the act of selection into unpaid labour via surveillance, masking the structural demand for “selective labour” (agency).

AI extends earlier separations of managerial decisions (reflective judgement) from the “mere work” performed by labourers—an affirmation of socio-cultural values that denigrate labour—thus revealing an ideological lineage from the assembly line to the office. This social paradigm parallels the technical paradigms of industrialisation. These developments reiterate/reinforce the entanglement of culture, economics, and the factory (Nemoianu 2006), but arise from implicit beliefs that guide the applications of each new innovation (Allport 1954).

2 Determinative Judgement as “Machine Learning”

The first critiques of industrialisation in the eighteenth century are moral arguments made in the arts in advance of later analyses based in economics. English poet William Blake wrote and illustrated Songs of Innocence (1789), later expanded as Songs of Innocence and of Experience (1794), challenging mechanisation and coal-fired factories as moral harms to society. Unlike later discourses, his “arguments” persuade through poesis and metaphor. This Romantic tradition of artists and poets is instructive when considering the socio-cultural impacts of technology: historically, the arts have been among the first aspects of society to be affected by the introduction of new technology (e.g., photography). Contemporary autonomous machinery and its displacing of human agency have antecedents in the effects of photography on visual art in the nineteenth century (Kaplan 2008; Burgin 2018). Art reveals the connections of automation to a hidden social paradigm absent from other types of analysis, making this lineage of interest to a critical assessment of the social and cultural impacts of automation since it expands the horizons of analysis beyond only the economic advantages posed by industrial mechanisation and the replacement of human labour with “labour-saving devices”. Automating routine tasks is not a neutral proposition—it substantively alters the social order and role of labour (Frey 2017). The deskilling of labour, which began with the first industrial factories in the eighteenth century, accelerates with “machine learning” to replace immaterial (intellectual) labour with autonomous agency (Braverman 1974; Makridakis 2017; Benanav 2020). AI now automates forms of intelligent labour that seemed insurmountably complex as recently as 2004 (Levy 2004), changing the role for human agency as these systems become pervasive and invisible (Frey 2017), consequently requiring vast social and cultural adaptations as well as political and economic transformations.

Artistic critiques of industrialisation typically pose moral arguments; however, Blake’s rejection of industrialisation became a mask for architect Adolf Loos, who in 1910 argues for an abandonment of traditional “decorative” aesthetics in favour of more profitable, “undecorated”, Modern designs. Loos subtly inverts Blake’s attacks on industrialisation while employing similar moral claims to hide, justify, and vindicate what is really an economic argument for increased surplus value. His rejection of decoration expresses beliefs about who is and is not valuable in society, linking those people with high moral value to the manufacturing of undecorated commodities which require less time and less skill to produce. He thus justifies an economic demand for profit by subsuming considerations of surplus value into aesthetic and moral arguments that cast anyone who disagrees as a degenerate or primitive unworthy of recognition or consideration. This aesthetic-cultural demand for an unconstrained use of technology to increase surplus values disguises its social and cultural impacts as benefits to society. AI parallels these arguments for the unconstrained and expansive use of technology (Mariotti 2021). Loos’ rejections of skilled, intelligent labour justify an economic goal through moral, economic, and even medical claims:

The immense damage and devastation which the revival of ornament has caused to aesthetic development could easily be overcome because nobody, not even the power of the state, can stop the evolution of humanity! It represents a crime against the national economy, and, as a result of it, human labor, money and material are ruined. Time cannot compensate for this kind of damage. Ornament is wasted manpower and therefore wasted health. It has always been like this. But today it also means wasted material, and both mean wasted capital. In a highly productive nation ornament is no longer a natural product of its culture, and therefore represents backwardness or even a degenerative tendency. (Loos 2002, pp. 32–34)

Loos’ “Ornament and Crime” elides the distinction between aesthetics and economics. His assertion of cheaper production’s fiscal benefits as a social good realised as/in the design (evident in the managerial control exercised over the aesthetics) of the commodity concerns only historically decorative art, rather than the minimal designs of the Modernism he promotes; this reductive aesthetic is common among the avant-gardes of the twentieth century (Calinescu 1987). This “pure” or “clean” or “moral” aesthetic is the beginning of the Formalist art known by the aphorism “Form Follows Function” that integrates ‘the beautiful with the useful’ (Rand 1970, p. 9). Current AI systems reify the agency of managerial decisions as more important than the productive labour expended in facture, thus continuing the unskilled aesthetics at the centre of Loos’ proposal. This lineage is a moral argument that assumes the regimented production of the assembly line is a restraint on the uncultured and degenerate tendencies ascribed to labour—thereby linking social status to the aesthetics/economics of the assembly line, and continuing to affirm the ideology of economic success as a demonstration of moral superiority (Lears 1981). These aesthetic choices implemented as the material form of the commodity reveal the social paradigm of automation that renders workers equivalent to the machinery they operate, a contemporary expression of axiomatic beliefs about the disposability of labour which have remained stable throughout capitalism’s 200+ year history (Braverman 1974; Stiegler 1998; Hall 2001).

The assertion that workers and machines are equivalent is a symptom of this social paradigm, validating beliefs that the industrial revolution can be separated from its larger socio-cultural context. Returning these dimensions to analysis illuminates a heuristic recursively tied to the regimentation of the factory where labour is displaced by machinery (Bright 1958). Understanding the digital computer as a ‘thinking machine’ (Durham 1969, pp. 10–13) brings the challenge and threat this device poses to the human (or the conception of “the human”) into focus as inherent to industrialisation (Polanyi 2001; Johnson 2006). The range of these automated systems includes both software requiring some degree of human oversight and fully autonomous systems (Munn 2022). The generative production of a commodity designed by human action occupies one extreme in a range bounded at the other end by those machines that perform complex actions without human control, such as driving a vehicle in traffic to a particular destination. Although both types of digital automation are distinct, in that one requires human oversight and the other does not, they converge as expressions of the same ideological demand to replace labour with machinery that does not require wages, creating an absolute financial advantage to using these systems (Warsh 2006), and providing a faux-empirical proof affirming claims that human labour will eventually be completely replaced by AI.

The revolution of “machine learning” attempts to concentrate managerial control while eliminating the costs of human labour. It provides an empirical example of the linkage of cultural belief with the fallacy that human labour can be rendered redundant via utopian fantasies of a post-capitalist future where AI systems perform all labour (Mason 2015; Agrawal 2019; Bastani 2019). Harry Braverman anticipated this use of digital systems in 1974, noting that ‘machinery offers to management the opportunity to do by mechanical means that which it had previously attempted to do by organizational and disciplinary means’ (Braverman 1974, p. 134). Connecting aesthetic, socio-cultural, and economic discourses entangles the expansive nature of automation with its impacts on all of human society, but also typically constrains analysis to only those questions that serve valorisation (economic) processes. Framing the embrace of automation as an ideology demonstrates the interlocking and mutually supporting nature of varied social, cultural, and economic “norms” that claim further expansions of mechanisation and the elimination of human labour are not only desirable, but possible, and imminent. This lineage that differentiates the social status of ownership and managerial labour from that of productive labour by separating mind/reflection/human from body/determination/machine informs the low caste of industrial workers (Graeber 2014; Wilkerson 2023). The moral and ethical claims by Loos and others mask the varied shocks and harms of capitalist accumulation as “natural” functions of power dynamics between classes. This social paradigm is immanent in the European Enlightenment project (Deleuze 1983) that views human agency as definitional for the status of “human” and is culturally expressed by a demand for individual autonomy and self-determination that stands in opposition to the regimentation of the industrial factory (Lears 1981).

Acknowledging this heritage evokes the taxonomy of agency (judgement) proposed by Enlightenment philosopher Immanuel Kant at the start of the industrial revolution. Conceiving agency as a concrete, literal model for organising and managing production instrumentalises his analysis as an industrial protocol:

The first alternative is rational and mathematical cognition through construction of the concept; the second is mere empirical (mechanical) cognition, which can never yield necessary and apodictic propositions. Thus I could indeed dissect my empirical concept of gold, and would gain from this nothing more than the ability to enumerate everything that I actually think in connection with this word; but although a logical improvement would thus occur in my cognition, no increase or addition would be gained in it. (Kant 1996, p. 675)

Kant’s proposal creates an implicit hierarchy for human intellectual labour as distinctions in “judgement”—his term for human agency. “Reflective judgement” is the highest aspect of being human and affirms differences in social status—it expresses a cultural rejection of physical labour that performs routine and repetitive tasks. His subtle distinction that allows human imagination to mediate between understanding and intuition becomes an instrumentality apparent in the role of determinative labour as a disposable commodity that does not decide or even evaluate its tasks (Betancourt 2021; Kremer 2022). “Machine learning” expands automation from replacing physical action to encompassing those immaterial tasks that require rote human intelligence in capitalism’s ongoing attempts to increase surplus values by replacing paid human labour with unpaid production (Braverman 1974). AI extends and amplifies this regimentation of industrial labour whose distinctions in social standing emerge as the collateral effects of capitalist economics on human society (Veblen 1994; Lears 1981).

The convergence of automation and rote intelligent labour continues a trajectory begun in the 1790s that is logically implicit in Kant’s description of “determinative judgement” as a mechanical thought that acts to “enumerate everything”. This protocol defines the essential nature of rote knowledge by differentiating it from the considerations and inventions of reflective analysis:

Determinative judgment [operates] under universal transcendental laws given by the understanding, is only subsumptive. The law is marked out for it a priori, and hence it does not need to devise a law of its own so that it can subsume the particular in nature under the universal. (Kant 1996, p. 19)

Technical systems instrumentalise the decisions of the designer (manager) who directs their operations, an expansion of the scientific management whose regimentation of the assembly line enabled the conversion of human labour into minimally intelligent linkages. This proposal of a mechanical mode of thought parallels the fragmentation of tasks in the factory, while suggesting the precise technical basis for contemporary AI, which reifies determinative judgement into a literally inhuman protocol: “machine learning” depends on a mechanical enumeration of elements—a cataloguing of features from within the training data—to create the instrumental “system of machinery” that Kant’s philosophical analysis precisely predicts. The unintelligence of AI becomes obvious when considering how determinative judgement matches a novel situation with the established information that is already potentially known; there is no “creation” in AI, only a statistical correlation to established knowledge. The significance of what is produced is relevant only to understanding the work, not to its fabrication (Kremer 2022)—a proposition of “mechanical cognition” that has become literally true with AI—reifying determinative judgement as an autonomous protocol apart from reason or intelligence, and justifying the suggestion that rote human agency can be fully replaced by automated systems. Connections between determinative judgement and immaterial labour become explicit because even intelligent rote tasks involve decisions made within limited degrees of freedom that can be precisely planned in advance (Barker 2011, pp. 46–48).

3 Who is Automated

Although “artificial intelligence” (AI) is an ambiguous term, it consistently refers to systems of computer automation that replace human labour (Wang 2019). The question ‘who is being replaced by AI?’ connects the social paradigm of automation to economic and political proxies for social class, and although AI is rarely autonomous (Munn 2022), the trajectories of its development move towards ever more restricted roles for human labour (Frey 2017), accompanied by greater distinctions between social classes (Dyer-Witheford 2015). It affirms the social paradigm that elevates reflective agency and denies determinative judgements (employed in rote, productive labour) an equal status. Contemporary arguments that all of humanity can be considered “robots” descend from this refusal of the status “human” for the rote actions of regimented labour (Johnson 2006). Empirical markers for social position such as educational attainment and occupational wages distinguish the labour that is prone to replacement from that which is protected from automation: “C-suite executives”—the class of workers who control how this technology will be used commercially—exploit their social position as managers of corporate and technical development and deployment to protect themselves from the same automated systems that replace their subordinates and staff. The Brookings Institution explains:

[T]hose with bachelor’s degrees will be much more exposed to AI than less-educated groups, and […] the parallel finding [is] that workers in higher-wage occupations […] will be much more exposed than lower-wage workers. The exposure curve peaks at the 90th percentile, suggesting that while middle- and upper-middle-class workers are likely to be impacted by artificial intelligence, the most elite workers—such as CEOs—appear to be somewhat protected. (Muro 2019, p. 11)

AI is not being implemented uniformly; however, the differences noted by the Brookings Institution study receive almost no discussion or analytic consideration in the corresponding report. Those tasks most readily automated by AI resemble the repetitive nature of physical production, but are performed by skilled, professional, intellectual labour manipulating an esoteric set of rules to produce a limited number of predetermined outcomes dictated in advance by those same rules. The skill of this labour lies with knowing when and how to apply the appropriate rule. Automating these skilled tasks deskills production (Braverman 1974; Flusser 2000; Ruskin 2009). With the emergence of an “information economy” and the reconception of creative activity as a commodity, AI renders intellectual labour as a fungible “industrial process” with the data itself becoming an “object” (whose commodity value is enshrined in intellectual property law) that parallels the role of raw materials in physical manufacturing (Stutz 2004; Fleissner 2006). The commodification of software paired with the use of AI to displace labour maintains and expands the socio-cultural devaluing of labour employing determinative judgements.

However, the skilled labour of “information workers” is not the only form of intellectual activity subject to automation: jobs in office administration, production, transportation, and food preparation, even though they represent only one-quarter of all jobs whose tasks are potentially automatable, are the ones most impacted by AI development (Webb 2019). The scope of workers subject to replacement encompasses a range of otherwise mutually exclusive categories: complex/creative professional and technical labour (such as art and design) with high educational requirements and social prestige, along with tasks of low prestige (such as personal care and domestic service). However, all these categories are united through a specific type of subservience that reveals their connection to the historical lineage of denigrated workers: they perform determinative labour in response to a specific managerial demand that can be quantified by “machine learning” in the same ways as assembly line production. The social, cultural, and economic impacts of AI and the justifications making those impacts acceptable are dependent on whose labour is automated. AI accentuates and expands the ‘capitalism-democracy contradiction’ (Foglesong 2003, p. 103) through the threat of technological unemployment—an industrial social paradigm that re-emerges via the technological paradigm of AI.

The contemporary embrace of automation devalues labour via the assumption that capitalism is primarily an economic system, revealed by the Marxist axiom that it is defined by human agency traded as a commodity. This proposition elevates Kant’s reflective judgement and denigrates determinative judgement as insignificant by masking/affirming social differentials in cultural authority and power that conceive any autonomy for labour as a loss of control for management. This denigration of labour is also an issue of social class whose cultural expressions exceed its economic role. Nineteenth-century art critic John Ruskin’s observations in The Stones of Venice (1853) about labourers as “animated tools” express the social paradigm that emerged in tandem with industrialisation, rejecting both determinative labour and the workers who perform it:

Understand this clearly: you can teach a man to draw a straight line, and to cut one; to strike a curved line, and to carve it; and to copy and carve any number of given lines or forms, with admirable speed and perfect precision; and you find his work perfect of its kind: but if you ask him to think about any of those forms, to consider if he cannot find any better in his own head, he stops; his execution becomes hesitating; he thinks, and ten to one he thinks wrong; ten to one he makes a mistake in the first touch he gives to his work as a thinking being. But you have made a man of him for all that. He was only a machine before, an animated tool. (Ruskin 2009, p. 161)

Ruskin’s comments converge on Kant’s determinative judgement since rote tasks require only minimal comprehension to complete and reproduce; they do not generate new knowledge. Ironically, rejecting labour whose ‘culmination is the machine, or rather, an automatic system of machinery’ (Marx 1973, p. 692) informs Ruskin’s appraisal, even as he argues against dehumanising labour. Denying the intellectual capacities of workers employed in factories not only casts them as “living machinery”, it tautologically justifies social and cultural dehumanisation by reinforcing differences in status obvious in the high social dominance belief that ‘those who cannot climb by these ladders [of success] are not worth troubling about’ (Schumpeter 2008, p. 188). It is manifest in the unmet ‘material needs’ posed by poverty, homelessness, and the “precariat”—that social class composed of poorly paid, often informal, or on-demand labour without financial security (Standing 2016). These symptoms are justified by a social paradigm concerned with the appropriate and acceptable uses for machinery, not human workers.

Ruskin’s rejection of factory labour and abhorrence for the perfection made possible by industrialisation are expressions of the same nineteenth century fears about automation and machinery that are explicit in E.T.A. Hoffmann’s 1816 story “The Sandman” (Rutten 2021, pp. 31–32): Olympia, the machine-girl whom Nathanael loves, is ‘soulless’ but perfect (‘richtig’)—an automaton who is the antithesis of “the human”—the conduit for an uncanny, nonliving, inhuman being that results in his death (Pascarelli 2002). Ruskin’s aesthetics oppose this mechanical perfection with the human ‘authorial hand’ that becomes a socio-cultural expression of ‘authenticity’ via the ‘hand-made’ that elevates the fragmentary, incomplete, and imperfect (Kaplan 1987, pp. 233–235). Commodities with features that would formerly be considered signs of inferior production, such as hammer marks, become valuable demonstrations of this anti-industrial aesthetic in designer William Morris’s Arts and Crafts workshops (Pevsner 2011): proof of being ‘crafted by the honest, simple, hard-working indigenous aboriginal people of wherever’ (Palachuk 2005, p. 41). This ressentiment of errors, flaws, and mistakes defies the uniformity of industrial production as a ‘shadow’ cast by the use of machines (Rutten 2021, pp. 33–34); these aesthetics rely on token evidence of agency to elevate unskilled production into a place of greater value (status). Nevertheless, it does not escape the social paradigm of the industrial since without the automated machinery that obviated the need for human skill, these aesthetics of the flawed become incoherent.

4 Taylorism and AI

The autonomous functioning of AI places it outside of human oversight as a manifestation of received dicta that block debate, questioning, or investigation by outside observers (Obermeyer 2019). While AI belongs to the historical lineage of ‘technological paradigms’ defined by their disruptive impacts on industrial capitalism (Schumpeter 2008), it is also an expression of industrial heuristics, thereby continuing the domination of agency beyond the assembly line (Kolozova 2015). This social paradigm develops as capitalism concentrates the economic benefits of an expansive marketplace while refusing to acknowledge the fiscal loop of production–consumption. Attempts to remove labour from the circuit of capital, and the resulting prospects for technological unemployment, reveal contradictions in industrialism via its cultural and political entanglements with human society (Benanav 2020). The social paradigm of automation describes what are neither negative outcomes nor unintended consequences.

These rejections of labour’s agency become a literal protocol in Frederick W. Taylor’s The Principles of Scientific Management (1911) that fragments tasks by fully planning how to accomplish each stage of production: the assembly line transforms labour into the ‘intellectual organs’ of the machine within a rigid, rote process (Marx 1990, pp. 690–691). The “labour process debate” theorised this replacement of human workers with machines as a mechanism of value extraction that operates as a protocol for social control. It limits productive action to the singular physical task at hand and thus allows its direct conversion into mechanical and autonomous systems, leading to a deskilling of human labour with ‘an ever-widening area and with an ever-increasing acceleration’ that reduces labour’s ability to object or even resist the changes in social status (Friedmann 1955, pp. 41–43, as cited in Braverman 1974, p. 94). Taylorism thus concentrates Kant’s reflective judgement (agency) in managerial decisions, anticipating William Ross Ashby’s cybernetics (1957), to enact the metaphors offered by Kant and Ruskin as productive directives: labourers only require a minimal comprehension of their specific work’s significance on the assembly line to perform it, and this makes them nothing more than an “intelligent connector”. This refusal of a holistic conception presages the compartmentalisation of algorithmic production as/in software, and matches how AI produces a “creative” commodity such as an AImage (Kremer 2022).

5 Conclusion

Fears about technological unemployment and a “robot revolution” belong to a lineage of fears about slave/serf rebellions that disenfranchise/devalue labour economically, socially, and culturally. AI emerges from a social paradigm justified by the economic functions that dominate discussions of social, cultural, and political questions. Digital technology continues and magnifies these impacts of established ideologies that are determinative factors shaping historical capitalism (Betancourt 2016). Replacing human labour has always been justified by a social paradigm apparent in a wide range of otherwise disparate arguments about machinery and the social significance of the industrial factory. The instrumental desire for disemployment in capitalism is more than merely a fallacy. It shapes the heuristics for AI’s development/deployment, and was predictable from the way Enlightenment philosophy has served to justify replacing labour with machines. This logical consequence of reifying Kant’s philosophical analysis of judgement as a prescriptive or instrumental system pinpoints a foundational cultural belief about the economic benefits it justifies. This technical lineage conceives the rote tasks performed by industrial labour as “slavery,” and affirms the literal meaning of the very term “robot” (Čapek 1923). Contemporary “machine learning” converges the labour of the skilled professional office worker with that of the unskilled assembly line worker: automation equally dehumanises the tasks being performed and the labour that performs them. The nineteenth century precept that labourers were equivalent to the machinery they operate supports the twenty-first century automation of skilled tasks by AI, while conceiving this replacement of human workers to be an unquestionable and ineluctable moral/social good—thus eliding concerns about human rights, moral obligations, or social protections.

This lineage might seem obvious; however, its operations and effects are masked through a process of naturalisation and misdirection, offering an archetype for understanding contemporary attempts to replace human labour with automation (Huws 2003). Acknowledging this social paradigm reveals its structural heuristics: demands for automation and fantasies of replacing all human agency are a dangerous moral hazard that challenges human rights, social liberty, and the pursuit of justice.