1 Introduction

I know how to build a business. You gotta get the black people to do it in order to get the white people to do it. Then you gotta get the black people to stop doing it. –Dwight Schrute (The Office)

The Office, an American television series that aired from 2005 to 2013, is a critical mockumentary that satirizes the everyday lives of office employees. With peak viewership averaging over seven million viewers per episode during its most popular seasons and extensive syndication both domestically and internationally, the show became a ratings success and a cultural phenomenon. Its satirical approach not only influenced American perceptions of work culture but also reflected broader societal and cultural issues. The show’s depiction of corporate management styles and mundane office life, as exemplified by the quote from character Dwight Schrute, serves as an entry point for discussing systemic racial and economic exploitation. This dynamic has long seen marginalized groups as economic levers, initially utilized for their labor and subsequently sidelined to maintain a status quo that privileges a select few. The United States’ economic ascent, fueled by the toil of enslaved individuals and later the exploitation of minority workers post-abolition, epitomizes this grim reality. Abraham Lincoln’s admission in a letter to Horace Greeley [12]—prioritizing union preservation over emancipation—underscores a consistent preference for economic unity over racial justice.

If I could save the Union without freeing any slave, I would do it...[I]f I could save it by freeing some and leaving others alone, I would also do that. –Abraham Lincoln [12]

This paper delves into the interplay between capitalism and racial discrimination, tracing its historical roots and its perpetuation through contemporary technologies like artificial intelligence (AI). Starting with Cedric Robinson’s foundational ideas on racial capitalism, we explore how economic systems historically exploited racial distinctions to enhance capitalist gains, a pattern that is starkly mirrored in today’s AI technologies. These systems, designed and deployed within the same capitalist framework, inherently manifest and amplify the biases and divisions that have been programmed into them, whether intentionally or inadvertently.

As we move into a detailed analysis of AI’s role in modern society, the focus shifts to how this technology, while heralded as a tool of efficiency and progress, actually reproduces and exacerbates inequalities. This is evident in the labor practices of the tech industry, where AI development often relies on underpaid and undervalued workers from marginalized communities, perpetuating a cycle of exploitation and exclusion. Moreover, the deployment of AI in various sectors—from law enforcement to social services—raises significant concerns about fairness, transparency, and the potential for these systems to further entrench social divides. The discussion extends to the personal and societal impacts of these technologies, highlighting the real and often overlooked consequences of biased algorithms and system designs that prioritize profit over people.

Ultimately, this exploration is not just an academic exercise but a call to action. It underscores the urgent need for a critical reevaluation of how AI technologies are developed and employed. By integrating principles of social justice, equity, and inclusivity into the heart of AI development, we can begin to dismantle the structures of oppression that have long been sustained by capitalist endeavors. We advocate for transformative approaches that ensure AI serves as a force for good, promoting societal healing and empowerment rather than exacerbating the injustices of the past.

2 Historical background

The concept of racial capitalism, as analyzed by Cedric Robinson [22], as cited in [10, p. 5], provides a foundational framework for understanding how capitalism has historically operated by accentuating and institutionalizing pre-existing social distinctions, transforming them into racial differences. Jenkins and Leroy further explain, “Racial capitalism is the process by which the key dynamics of capitalism—accumulation/dispossession, credit/debt, production/surplus, capitalist/worker, developed/underdeveloped, contract/coercion, and others—become articulated through race.” This mechanism of racial capitalism, while analyzed here in the context of the United States, is neither unique to it nor solely a dynamic between white and Black populations. Globally, racial capitalism manifests wherever economic systems exploit racial distinctions for economic gain, regardless of the majority or minority status of the racial groups involved [10].

In the United States, economic growth was founded in large part on the exploitation of minoritized labor—particularly that of Black and Indigenous peoples—a pattern that persisted from slavery through sharecropping, extracting profit by appropriating minority ingenuity while eroding their dignity. K-Sue Park’s [15] account illustrates how Indigenous peoples lost land through strategies like foreclosure, which capitalized on their unfamiliarity with colonizers’ economic practices. This pattern of exploitation and dispossession is a hallmark of racial capitalism, seen in both historical and contemporary settings.

Post-abolition, racial capitalism continued to dehumanize and underpay non-white workers, leading to persistent racial wealth gaps and exclusion. Pedro A. Regalado’s [21] examination of Latinx entrepreneurship in the 1960s shows how economic opportunities inadvertently intensified internal divisions within the community, reflecting Robinson’s insights into how capitalism amplifies pre-existing social distinctions such as legal status. Additionally, Regalado demonstrates how linguistic vulnerabilities were exploited, a tactic reminiscent of those employed against Indigenous communities centuries earlier, as discussed by Park.

In parallel, artificial intelligence emerges as a contemporary manifestation of these age-old disparities, capable of mirroring and magnifying biases embedded within the data it processes. Just as capitalism leveraged pre-existing social hierarchies for gain, AI risks doing the same, reinforcing racial disparities under the guise of technological advancement. In a sense, AI technology, like capitalism, functions as a machine that exploits existing patterns—in data—to accelerate production of outputs. Scholars like Ruha Benjamin and Simone Browne caution against unchecked AI, which could not only perpetuate but also amplify systemic oppressions through sophisticated means of profiling and control. This scenario underscores the imperative to examine AI critically through the prism of racial capitalism, recognizing the potential for these technologies to further entrench inequalities rather than dismantle them.

3 AI: the modern frontier of exploitation

AI doesn’t operate in a vacuum; it mirrors the society that gives it life. The tech sector manufactures tools embedded with social biases, training AI models that ‘think’ like the corporations that build them—corporations predominantly steered by white men [1, p. 35]—at the expense of broader ambitions and morals [16]. This predilection for efficiency over human values carries profound implications for the implementation of AI-driven social services and their consequences for marginalized communities. The rapid proliferation of automated systems in these services, driven by the relentless pursuit of efficiency, raises concerns about fairness, transparency, and accountability. In this section, we aim to illustrate how capitalism’s influence on AI systems consistently drives them to perpetuate disparities—whether through labor exploitation, unequal time burdens, or the erosion of minority well-being—underscoring how marginalized communities are disproportionately affected across the various dimensions of AI development.

3.1 Labor tax: How underpaying minority workers fuels AI development

In their investigative piece The Exploited Labor Behind Artificial Intelligence, Williams, Miceli, and Gebru [24] expose the harsh realities faced by gig workers in the AI industry, who are disproportionately drawn from vulnerable populations. Big tech players in the AI industry such as Amazon and Facebook rely heavily on these laborers, including data labelers, content moderators, and delivery drivers. Despite their indispensable contributions, these workers often endure abysmal pay, constant surveillance, unrealistic quotas, and even safety hazards. Further details from the investigation reveal that data labelers, responsible for critical AI training data, can earn as little as $1.77 per task. In one troubling case, content moderators in Kenya are relentlessly monitored and pressed to make split-second decisions, even when confronted with disturbing content. These workers, whose labor sustains billion-dollar companies, are unjustly compensated.

Izaguirre [9] discusses a recent example of labor exploitation within the gig economy involving the ridesharing giants Uber and Lyft. In New York, these companies are set to pay a staggering $328 million to settle complaints that they wrongly imposed on their drivers taxes and fees that should have been covered by passengers. State Attorney General Letitia James unveiled the settlement, emphasizing how Uber and Lyft—which incorporate AI technologies in their routing and pricing algorithms—had systematically deprived their drivers of millions of dollars in earnings and essential benefits. These drivers, akin to those engaged in data labeling for AI models, toiled long hours in challenging conditions while being undercompensated, and they waited an arduous eight years for justice. The repercussions of such labor exploitation demonstrate the persistence of these issues across different sectors of the gig economy, and the drivers’ long wait underscores the enduring time burden in their battle for rights—a dimension we explore in the next subsection.

Moreover, racial capitalism rears its head in generative media AI applications as well. One example is the Argentina-based design firm Icons8, which built generative AI systems to synthesize fake profile photos by exploiting images of minority models without consent or payment [8]. Though the models’ images provided core training data, the models received no payment for this labor, nor any share of the profits derived from their inputs. The marginalized thus provide the raw material—their unvalued data and likenesses—for technologies from which they do not benefit. Once again, capitalism fueled AI development by commodifying bodies, and this case provides further evidence that we must scrutinize and address the human exploitation underlying much of AI’s progress.

3.2 Time surcharge: The burden on minorities to fix biased systems

In 2019, a pivotal federal study by NIST unveiled the entrenched racial biases within facial recognition algorithms, revealing higher false positive rates for Asian, African American, and Native American faces compared to white counterparts [7]. This disparity is not merely a technical oversight but a reflection of a broader societal issue where fairness is often assessed through a predominantly white lens, perpetuating and amplifying harmful racial stereotypes. These algorithms, failing more frequently with Black and Asian individuals, not only reinforce the misconception of homogeneity within these communities but also echo Cedric Robinson’s argument that capitalism morphs pre-existing social differences into racial disparities, using technology to further these divides.

These biases extend beyond race to gender and age, with algorithms showing elevated error rates for women, the elderly, and the young compared to middle-aged white men. Simone Browne, in Dark Matters [3, p. 113], elucidates how these systems perpetuate stereotypes, such as misclassifying Asian males as females and Black women as men, illustrating how AI models, like capitalism, disseminate racial and gender stereotypes on a massive scale under the guise of technological neutrality.
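The disparity measured by audits like the NIST study can be made concrete with a small sketch. The code below is purely illustrative—the group labels and numbers are hypothetical, not the NIST data—but it shows the metric in question: the false positive rate (the share of true non-matching pairs that a face matcher wrongly declares a match), computed separately per demographic group so that gaps between groups become visible.

```python
def false_positive_rate(records, group):
    """FPR among true non-matching pairs for one demographic group.

    Each record is a tuple: (group, ground_truth_match, predicted_match).
    FPR = false positives / all true non-matches within the group.
    """
    negatives = [r for r in records if r[0] == group and not r[1]]
    false_positives = sum(1 for r in negatives if r[2])
    return false_positives / len(negatives) if negatives else 0.0

# Toy evaluation log: every pair is a true non-match, but the matcher
# wrongly declares a match more often for group "B" than for group "A".
log = [
    ("A", False, True), ("A", False, False), ("A", False, False),
    ("B", False, True), ("B", False, True), ("B", False, False),
]

for g in ("A", "B"):
    print(g, false_positive_rate(log, g))
```

A per-group breakdown like this is what reveals the disparity: an aggregate FPR over the whole log would hide the fact that group "B" is misidentified twice as often as group "A" in this toy example.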

The case of Brandon Mayfield, as discussed by Browne, exemplifies how biases in AI systems can have real-world consequences. A veteran and a lawyer who had recently converted to Islam, Mayfield was erroneously matched by FBI algorithms to a fingerprint found on a bag containing a detonating device linked to the 2004 Madrid bombing [3, p. 115]. Despite being one of twenty matches, his recent conversion to Islam and status as a veteran, both associated with prejudicial stereotypes, led authorities to single him out. Mayfield’s personal life and beliefs were publicly exposed under a false accusation, and he remained entangled in legal battles to clear his name until November 2010. This wrongful identification underscores how minoritized groups not only bear the brunt of these biases but also face a “Privacy Levy,” paying a significant personal and financial price due to prejudiced technological systems.

This systemic bias embedded within AI technologies serves capitalist ends in several ways. Firstly, it perpetuates a cycle where the burden of correcting and navigating biased systems falls disproportionately on marginalized communities, consuming their time and resources—a “Time Surcharge” that benefits the status quo by maintaining existing power dynamics. Secondly, the deployment of such technologies, especially in law enforcement and surveillance, opens lucrative markets for tech companies, capitalizing on the state’s desire to monitor and control, often at the expense of those already marginalized. In essence, these biased algorithms are not just products of existing societal inequalities; they are tools that capitalism exploits to further entrench its dominance, ensuring that the costs of innovation are borne by those least able to bear them, while the profits accrue to those already in positions of power.

It is important to point out that the concept of a “Time Surcharge” extends beyond the realm of AI and technological bias, reflecting a broader systemic issue that minoritized communities face across various bureaucratic and social systems. This enduring burden of time, as analyzed in the context of racialized administrative burdens by Ray, Herd, and Moynihan in Racialized Burdens: Applying Racialized Organization Theory to the Administrative State [20], is often utilized as a means to punish and control, reinforcing poor living conditions. Similar to the ways in which biased algorithms require marginalized groups to navigate and correct flawed systems, these bureaucratic processes consume excessive amounts of time, serving as barriers that maintain social and economic inequalities. These practices are reminiscent of historical strategies designed to disenfranchise and exploit marginalized populations, perpetuating a cycle where the costs of bureaucratic inefficiency and technological innovation are disproportionately borne by those least equipped to handle them, while the systems in place continue to benefit those in power.

The development and implementation of biased AI systems are stark manifestations of racial capitalism, where technological advancements are leveraged to amplify and profit from racial and social disparities. The need for a concerted effort to address these biases is not just a matter of technical accuracy but of challenging the capitalist structures that incentivize and benefit from the perpetuation of inequality.

3.3 The wellbeing toll: How AI progress erodes minority health

The advancement of AI technology, hailed as a cornerstone of modern innovation, often conceals a human toll disproportionately borne by marginalized communities. D’Ignazio and Klein’s observation [4, p. 183] that the tech industry relies on the precarious labor of older women of color contrasts sharply with the privileged demographics of Silicon Valley. This labor force, essential to refining AI technologies, faces not just economic exploitation but significant mental health challenges, as Billy Perrigo’s investigations [17, 18] into the conditions of data labelers reveal. Workers engaged in making AI systems safer endure exposure to traumatic content for meager wages, suffering profound mental health impacts without adequate support. This reality underscores a capitalist calculus in which AI progress is prioritized over the well-being of those who enable it, revealing a stark exploitation that benefits technological advancement and profit margins at significant human cost.

In her work Automating Inequality [5, p. 173], Virginia Eubanks provides comprehensive evidence of the discriminatory outcomes resulting from automated social services. One striking example is the case of Angel and Patrick’s family. Angel’s decision to seek help from a counselor and take medication to manage her PTSD is undoubtedly a responsible one, aimed at becoming a better-equipped caregiver for her children. However, her use of public assistance—necessitated by the family’s socioeconomic status—to access these essential services has an unintended negative consequence: it affects the family’s AFST (Allegheny Family Screening Tool) score, produced by a predictive model, putting them at risk of having social services intervene and potentially remove their children. This situation raises a critical question: why should it be a choice between the parents’ well-being and the children’s welfare? It should be an ‘and’ rather than an ‘or,’ emphasizing the importance of supporting families as a whole. Moreover, this scenario exemplifies the “Privacy Levy,” where seeking private counseling and prescriptions would not flag the same concerns, highlighting the disproportionate burden placed on marginalized communities.

These instances reflect a broader trend of “algorithmic exploitation,” where the relentless pursuit of efficiency and profit in AI development and application exacerbates social disparities. This exploitation is twofold: firstly, it leverages the labor of marginalized workers under deplorable conditions, enhancing AI capabilities while neglecting the workers’ health and economic stability. Secondly, it deploys these technologies in ways that disproportionately harm minority communities, whether through surveillance, law enforcement, or social services, thereby reinforcing systemic inequalities.

As this section concludes, it’s clear that the capitalist benefit derived from these practices is multifaceted, encompassing direct economic gains from reduced labor costs and expanded market dominance, as well as indirect advantages through the perpetuation of a socio-economic order that maintains a readily exploitable workforce. The prioritization of technological advancement and profit over human dignity and equity reflects a fundamental misalignment of values, where the potential of AI to serve the common good is compromised by a capitalist ethos that values profit above people. Addressing these issues demands a reevaluation of the objectives and ethics guiding AI development and deployment. It necessitates a shift towards a more equitable approach that recognizes the intrinsic value of all labor, respects the dignity of every individual, and prioritizes the well-being of marginalized communities. Only through such a transformative shift can we harness the potential of AI to contribute positively to society, rather than perpetuating the injustices of a capitalist system that exploits the vulnerabilities of the least among us.

4 Discussion and conclusion

This comprehensive exploration reveals compelling evidence that contemporary AI development mirrors and perpetuates the exploitation and marginalization of minoritized groups witnessed under historical racial capitalism. AI not only reflects but also concentrates deep societal biases ingrained in our structures. Despite promises of progress, this emerging technology entrenches social divides, much like capitalism historically intensified differences into racialized hierarchies. Uncontrolled AI poses a significant threat, exacerbating current oppressions and expanding racial disparities across dimensions such as labor, rights, and well-being. Undervalued minority workforces fuel systems they cannot equally access or shape, while flawed data and algorithms covertly deny opportunities and resources. The relentless pursuit of AI innovation disregards the resulting trauma among marginalized builders.

Race and economic exploitation are always already deeply embedded within the framework of AI, inherently so because they are foundational to the capitalist structures within which AI is developed and deployed. So, what can we do? As Pratyusha Kalluri advises, we need to shift our inquiry from simply questioning whether AI is ‘good’ or ‘fair’ to probing how it shifts power. We must advocate for greater inclusion in shaping AI technologies, particularly involving those who have been historically excluded. By involving these communities directly in the creation and regulatory processes, we can begin to address the imbalances of power and ensure that AI development not only avoids perpetuating inequalities but actively contributes to social justice [11].

Confronting and reforming the underlying capitalist structures and ideologies that shape technological development is essential. This approach demands more than technological innovation—it requires a transformation of the values and priorities that guide this innovation. Moving beyond techno-solutionism, we must foster an AI development ethos that is deeply informed by social justice, historical consciousness, and a commitment to dismantling rather than perpetuating the systems of oppression embedded within our society.

The recent incident involving Google’s Gemini chatbot illustrates the pitfalls of superficially and belatedly addressing racism and bias in AI development. Despite its intention to create diverse and inclusive imagery, Gemini produced outputs that were historically inaccurate and racially insensitive, masquerading as authentic historical representations [6]. Often, by the time corrective measures are implemented, the foundational data and structures are already steeped in the biases they aim to eradicate. This situation is akin to attempting to cleanse water once it has already been poisoned at the source. Thus, addressing racism and bias in AI requires more than just post-hoc adjustments; it demands a fundamental overhaul of the underlying frameworks and systems.

In light of such incidents, it becomes imperative to champion AI development processes that are not only technologically sound but are also socially just and inclusive. Several entities are at the forefront of these efforts, contributing through research and advocacy to mitigate biases within AI systems. For example, the Algorithmic Justice League (https://www.ajl.org/) focuses on public engagement and the mitigation of bias in AI applications, highlighting the necessity of community involvement in technological oversight. Similarly, Data & Society (https://datasociety.net/) conducts in-depth studies on the social implications of data-centric technologies, emphasizing the need for interdisciplinary approaches to understand and address the complexities of AI deployment. Additionally, the AI Now Institute (https://ainowinstitute.org) offers critical insights into the rights and liberties of populations affected by automation and AI, advocating for enhanced regulatory frameworks that ensure these technologies serve the public good.

Ultimately, the path forward must involve reimagining the role of technology in society. By fostering an AI development culture that values each individual’s dignity and rights, we can harness the potential of these powerful tools to create a more just and equitable world. This transformative vision requires a collective commitment to dismantling the oppressive structures that underpin current technological practices, paving the way for innovations that empower and heal rather than divide and exploit.