Abstract
This review seeks to present a comprehensive picture of recent discussions in the social sciences of the anticipated impact of AI on the world of work. Issues covered include: technological unemployment, algorithmic management, platform work and the politics of AI work. The review identifies the major disciplinary and methodological perspectives on AI’s impact on work, and the obstacles they face in making predictions. Two parameters influencing the development and deployment of AI in the economy are highlighted: the capitalist imperative and nationalistic pressures.
1 Introduction
This article reviews recent literature on the likely impacts of artificial intelligence (AI) in the world of work. It is one outcome of a grant-funded project whose aim is to map out the arguments for and against the idea that work is “central” for individuals and communities (see Deranty 2021, and the online repository: onwork.edu.au for other outputs). Arguments for and against the importance of work have a long history (Applebaum 1992; Komlosy 2018), and they gathered renewed urgency with the rise of capitalism. In the last 200 years, each generation has wondered about work and its importance, in constantly evolving technological, economic, social, and political conditions. Today, debates on the centrality of work are shaped to a significant extent by the impact that artificial intelligence and machine learning are expected to have on economies, on social structures, and for working people. This research background explains why the present review is not conducted from a specific disciplinary stance and why it covers the broad array of issues that it does, from the transformations of tasks and the disruptions of existing labour markets to new macro-economic trends, all the way to emerging political struggles. Large amounts of specialized research are being produced on all these topics, and there is a need to view the many methods, assumptions and findings alongside each other. In this paper, we offer a critical review of this recent literature, bringing together the disparate scholarship on AI in the world of work, and critically evaluating the problematic assumptions driving the leading interpretations and predictions regarding the future of work.
Without a doubt, the lack of a specific disciplinary perspective and a broad thematic scope have drawbacks. One can plausibly argue that only through particular social-scientific methods can specific features of an economic and social phenomenon be accurately described. And a broad scope brings with it the risk of missing important details as well as important references in each disciplinary field. However, there might also be benefits to taking an encompassing, non-specialised approach, which might outweigh these concerns. Such an approach might provide a more comprehensive snapshot of existing knowledge on the impact of AI in the world of work. Empirical research about platform work (Tubaro et al. 2020; Casilli 2019; Tubaro and Casilli 2019) illustrates how difficult it is to keep in view all aspects of it at once. Even actors who are directly involved overlook aspects of the process as they consider it from their particular perspectives, based on their own interests and assumptions. A similar one-sidedness might affect specialist research. Studies of AI’s impacts, its expected benefits and harms, are carried out by researchers in many disciplines (computer science, business, economics, management, organization studies, sociology, industrial relations, labour economics, history of economics and of technology, applied ethics, and more), using their disciplines’ particular methodologies. Whilst specialised research gives access to particular aspects of the complex reality of AI, it is also important to have a view of the whole. The approach we take to the existing literature attempts to highlight the viewpoints from which assessments are made and the potential limitations built into these viewpoints. It suggests connections between aspects that tend to be looked at separately in specialist accounts. In order for such connections to become visible, it is necessary to be reflective of the context in which AI is deployed, as powerful background imperatives influence its development and deployment.
In Sect. 2, developing these points about assumptions and scope in the study of AI, we define more precisely what we mean by “critical” in attempting a critical review. In Sect. 3, we focus on issues of technological unemployment. Section 4 is about algorithmic management. Section 5 is dedicated to platform work. The conclusion considers the political dimensions of AI’s impact on work.
2 Methodological considerations
a. Definitions and scope
For the purposes of this study, we do not engage with the thorny issue of defining “intelligence” in Artificial Intelligence (see Wang 2019 for a thorough discussion). Our concern is simply with what AI-based work processes can achieve, what tasks AI can fulfill with, or instead of, human agents. More specifically, we are concerned with what AI can do in the framework of how economic activity is currently understood and organised. Work is a similarly difficult term to define (Budd 2011). For our purposes here, work denotes the activities individuals engage in as part of the production of goods and services, for a profit if they are business owners, for a wage if they are employees, in the commercial, the public or the “third” sector. Such a vision of work is too restrictive, and we have ourselves criticized such a narrow take on work in other debates. Work covers activities of social reproduction, many of which are not counted in current economic classifications, and AI will be involved in these activities as well (Deranty 2021). However, to keep the study within reasonable limits, we use work here in the traditional, restricted sense of formal economic activity.
By AI’s impact on work, therefore, we understand the effects of computational methods that rely on the gathering and processing of data to replicate activities human agents engage in as part of labour processes in the formal economy. These uses of AI include: coordinating machines and industrial processes (in manufacturing); managing all aspects of the workforce (HR, management, WHS); gathering, processing and evaluating information about business activities (accounting, forecasting, investing); predicting and evaluating outcomes for customers (expert advice in legal, medical and psychological matters); evaluating risks and benefits for customers and for internal purposes (insurance and finance); communicating with clients (customer service and counselling); anticipating, creating and managing customer needs and market demand (marketing and advertising); stocking and distributing material goods (logistics), including by transporting them (self-driving trucks and cars); and supporting and potentially undertaking theoretical and applied research.
b. A critical approach
The approach we are taking to the vast and diverse literature on AI’s impact on work is “critical” in a sense captured by what might be called the Macbeth question: “Say from whence you owe this strange intelligence” (Act 1, scene 3). The core assumption this question captures is that scientific inquiries are inextricably tied to the historical context in which they occur. That context harbours specific tensions between human groups, conflicts between needs, views and interests, which lead to implicit and declared social–political conflicts. These tensions and conflicts mobilise all kinds of knowledges and even define particular epistemological standpoints. Whether they are aware of it or not, knowledge claims reflect social and political fault lines.
This means that AI did not emerge and is not being deployed in an economic or a political void. Digital innovation, like all modern technology, certainly has its own momentum, yet advances in theoretical knowledge and technical improvements alone do not suffice to explain the actual paths it travels and the forms it takes. The social context plays a direct part in the lines of technological development: in the concrete features that artefacts and processes take, in which objects and artefacts are produced and deployed in the first place, and in the ways in which objects, processes and networks are put to use. This insight is well established of course (Stamper 1988). It has been validated and explored at length in the philosophy of technology (Feenberg 1991, 2002, 2012 for instance; Wajcman 2017 from a sociology perspective).
In studying the impact of AI on work then, it is crucial to keep in mind what kinds of imperatives and pressures AI innovation and deployment are under. AI processes are deployed in a context characterised by two fairly uncontroversial features: a capitalistic imperative and nationalistic pressures. These imperatives and pressures influence the paths taken by digital innovation and the forms in which it is deployed.
c. The capitalist imperative
A passage from Max Weber cited by Shoshana Zuboff in her introduction to Surveillance Capitalism (Zuboff 2019, p. 22) makes the point succinctly:
“The fact that what is called the technological development of modern times has been so largely oriented economically to profit-making is one of the fundamental facts of the history of technology.”
As Zuboff adds,
“In a modern capitalist society, technology was, is, and always will be an expression of the economic objectives that direct it into action.”
This applies in particular to AI and machine learning (Pasquale 2015 for an impressive account). Digital innovation has been driven to a significant extent by attempts to win economic competition and increase profit through the usual methods of capitalist economies (Brynjolfsson and McAfee 2014 for a candid version; Steinhoff 2021 for a critical, Marx-inspired take). Countless articles and books by business specialists describe the competitive advantage firms gain from investing in AI, with reference to positively connoted concepts like innovation, flexibility, adaptability and so on (for example Daugherty and Wilson 2018). Behind these fancy terms are mundane economic mechanisms (Lu and Zhou 2021 for a recent review). AI has benefited from investment by companies that hope to find in it a new avenue for cutting costs, notably labour (Bessen et al. 2018) and transaction costs (Gurkaynak 2019; Lobel 2018), increasing outputs through rationalization of the production process (Acemoglu and Restrepo 2019), raising productivity (Brynjolfsson et al. 2019), managing the workforce in more efficient ways (Eubanks 2022), including through increased surveillance and control (Bales and Stone 2020), refining customer knowledge, and deliberately seeking to establish monopoly positions (Coveri et al. 2021; Rikap 2021). AI is viewed as a new way of reducing the labour share of income (Gries and Naudé 2018). As the core technology in platforms and in the new business model they incarnate (Srnicek and Williams 2016), AI is seen by advocates of capitalism as ushering in a new, more agile and productive iteration of the system.
d. Nationalistic pressures
The second obvious feature of the current context determining the shape of AI is the nationalistic one. A major driver of AI innovation and development in the US since the 1980s has been the military (Berman 1992; Morgan et al. 2020 on US investment in AI for intelligence and surveillance systems). In more recent years, the battle for geopolitical hegemony has meant that other major powers, notably China, have also invested significantly in AI research (Barton et al. 2017; Allen 2019; Savage 2020; Roberts et al. 2021). The geopolitical factor is directly tied to the economic one. AI development is stoked by an alliance of corporate and military interests: economic competition is one aspect of the battle for geopolitical hegemony, and military supremacy serves to ensure economic prosperity in competition with other nations (Hwang 2018 for the geopolitics of controlling semiconductor manufacturing and AI patents).
Underneath economic and military competition, an ideological battle is under way, one that pits the core values held by the different actors against one another. To take one example, the Joint Artificial Intelligence Center of the US Department of Defense presents its initiative as based on “solutions that are aligned with America’s laws and values”. There is a promise in the statement that AI innovations will deliver the kind of “good AI”, or “AI for good”, that progressive-minded researchers and citizens are hoping for (Acemoglu 2021). But there is a more antagonistic aspect to such declarations, namely that the values embedded in the ethics of US-led AI will be American ethics, that is, a particularly American way of interpreting the core ethical norms that are to drive AI programs and algorithms. If we refer to the key norms of bioethics that Floridi suggests we extend to AI ethics (Floridi and Cowls 2019), such as “autonomy”, “justice”, even “beneficence”, these norms might mean different things in Silicon Valley, in Beijing’s technology district, and in European labs. The content of those values might well be irreconcilable. Beyond any cynical take on the influence values might actually exert over the development of AI for world powers and corporations fighting for hegemony, there might also be an ideological battle under way between, say, a liberal-capitalist, a social-democratic and a communist understanding of “good AI”.
3 Technological unemployment
The first area of focus for studies of AI’s impact on work is the threat of technological unemployment, an issue that has captivated imaginations for more than a decade. Debates about the impact of AI on employment revolve around its predicted quantitative impact: how far AI is likely to lead to machines replacing humans in work (3.2). This issue, however, depends on an understanding of the types of work activities AI is likely to perform, which in turn relies on assumptions about the skills involved in the tasks making up different types of jobs (3.1).
3.1 What types of jobs are affected?
AI is introduced in workplaces to increase efficiency in some technical aspect of the work process, or explicitly with the aim of replacing human workers and thereby reducing labour costs. Whatever the reasons, a key condition of its success is that it can replicate the outcomes achieved by human workers. Making this point does not commit one to the “AI fallacy”, the misleading belief that intelligent machines replace humans by replicating human skill use (Susskind and Susskind 2015). It might well be that machines achieve results similar to those of human agents via different mechanisms. It remains true, however, that successful outcomes in the labour process remain the baseline condition for the deployment of any workforce, human or digital. Predictions about technological unemployment rely on assumptions about the ways in which human workers achieve the outcomes expected in their jobs. The core concepts in discussions of technological unemployment therefore are those of skill, task, job, occupation and industry.
In much of the literature in the social sciences, notably in labour economics, management and the sociology of work, these terms are taken for granted. Discussions centre on the methods to accurately capture macro-economic trends and micro-economic issues (see typically Autor et al. 2003), but not on the concepts themselves. This is problematic. To understand the impact of AI on work, one should not overlook the empirical complexity and conceptual slipperiness of a concept like “skill”. Attewell (1990) and Spenner (1990) show the range of possible meanings, how the intuitive appeal of a notion like “unskilled” work is in fact anything but self-evident, and how evaluations of “skillfulness” can shift depending on the perspective taken. Industrial sociologists demonstrate through grounded case studies that intuitive assumptions about “routine work” can be deceptive (Pfeiffer 2016). Researchers in education question the validity of the concept (Clarke and Winch 2006), particularly after it expanded with the shift from industrial to post-Fordist frameworks, where a whole array of communicative, social and emotional abilities, as well as personal attributes such as work commitment, were added to traditional craft knowledge (Payne 2000). Concrete issues arising from overlooking the complexity of skill will be discussed in the final part of this section.
It is well established that previous waves of automation propelled by advances in information and communication technology were biased in favor of workers with higher skills, replacing lower-skill workers and assisting workers with pre-existing complex skill sets (Acemoglu and Autor 2011; Göranzon and Josefson 2012; Buera et al. 2015; Mellacher and Scheuer 2020). Classical work in labour economics on the differentiated impact of innovation (Tinbergen 1974) has led to sophisticated econometric models of “skill-biased technological change” (Autor et al. 2003; Acemoglu and Restrepo 2020a), which describe the premium that technological innovation gives workers with higher skills, in terms of wage increases and the very availability of jobs. The question is whether the established tenets of labour economics remain true for AI.
AI works by processing large amounts of data to identify patterns (Van Rijmenam 2019). It does this particularly well when there are set parameters for the data and set aims for the patterns. Because of this, AI technology is particularly efficient in task-oriented, routine environments where large amounts of data can be analysed to identify patterns, make decisions based on those patterns, and produce solutions or efficiency dividends, for instance in the banking and finance industries (Neufeind et al. 2018; De Vries et al. 2020). For these reasons, there seems to be a case for interpreting the effect of AI along the same lines as the previous wave of automation through computerisation. That is to say, AI will replace large numbers of jobs involving routine work, which is often manually conducted and requires little expertise, hospitality and tourism being typical examples (Huang et al. 2021). There are several factors, however, that complicate this picture significantly.
First, a detailed look at what professionals actually do challenges the assumption that work which appears more complex, or which requires higher skills, is necessarily non-routine. A significant portion of professional work in fact involves routine activities (Ford 2015, 2021; Susskind and Susskind 2015, 2020), such that, even if some “higher” cognitive components are involved (memorisation, complex judgement, evaluation), “higher skill” jobs are themselves open to automation by AI. In legal practice, for instance (Susskind’s own area of expertise), PerfTech.AI’s “Artificial Law Clerk” promises vastly increased accuracy and productivity and reduced overall costs.
Second, the most striking aspect of AI-based automation is the capacity of machines to operate autonomously, to “learn” rather than function solely on preset patterns. As a result, labour economists, notably Acemoglu, Autor and Restrepo, highlight AI’s relationship to “high skill automation” (Acemoglu and Restrepo 2018a, b), which compounds the exposure of “higher skill” jobs to automation. The Susskinds’ related prediction of “the end of professions” is corroborated by a number of reports (for instance Manyika et al. 2017). In the area of medical diagnosis, for instance, a number of AI systems (VizAI, PathAI, Buoy Health, Enlitic) are already in operation that complement, and in the future might fully replace, human specialists in symptom diagnosis and treatment advice.
Third, many entry-level and manual jobs are in fact not routine and, therefore, not easily codifiable (Goos et al. 2014; Autor et al. 2015; Barbieri et al. 2020). Some basic-skill jobs are thus uniquely resistant to automation. This rests upon the famous Moravec Paradox, which “refers to the striking fact that high-level reasoning requires very little computation, while low-level sensorimotor skills require enormous computational resources” (Van de Gevel and Noussair 2013, 14). Some skills which come naturally to human beings require massive amounts of computational power to replicate, and consequently, “it will be hardest for new technology to replace the tasks and jobs that workers in the lower skill level occupations perform, such as security staff, cleaners, gardeners, receptionists, chefs, and the like” (Gries and Naudé 2018, 4).
Finally, human skills can be complemented rather than copied by automated processes, with machines taking charge of the routine aspects of the job (Autor 2014; Brooks et al. 2020; Alarie et al. 2018; Ekbia and Nardi 2017). For instance, in the context of legal work, JP Morgan Chase uses an AI program called COIN which interprets commercial loan agreements and saves an estimated 360,000 work hours per year (Wall 2018). And human work can complement machine work, through “humanly extended automation” (Delfanti and Frey 2021). This is the case (Ebben 2020) when low-skill tasks continue to be fulfilled by humans in the service of automated processes, for instance in warehouse work. Zhang et al. (2021, 4) observed that “human workers’ ability to avoid errors was greatly augmented by AI applications, while the capabilities of AI applications were constantly strengthened based on feedback from human workers.” In cases such as these, work continues to be performed by humans not because it is difficult to automate, but because humans are better at it, or cheaper to employ, than machines.
3.2 Substituting, complementing, or creating human work
3.2.1 Pessimistic scenarios
Many economists and technology experts contend that AI will substitute for human work on such a scale that socio-economic organisation will be shaken to its foundations. This is a major aspect of debates on the centrality of work today, and often the initial argument for “post-work” models of social organisation (typically Danaher 2019).
Universally cited references are, first of all, Brynjolfsson and McAfee’s publications, notably Race Against the Machine (Brynjolfsson and McAfee 2011) and The Second Machine Age (2014). The two business and technology experts extol the capacity of intelligent machines to lift productivity, massively increase outputs and spur wealth creation, driving prices to zero for some commodities. Their celebrations of the digital revolution, however, come with warnings about the severe impact of AI-driven automation on labour markets, as technological advances create more losers than winners because of skill and capital bias. The policy solutions they call for are premised on the dangers of automation, and of AI in particular.
Another ubiquitous study is Frey and Osborne’s The Future of Employment: How Susceptible are Jobs to Computerisation? (2017). The study had been cited over 10,000 times at the time of writing (Google Scholar, January 2022). The two business scholars famously predict that 47% of all jobs in the U.S. are at risk of technological replacement within two decades. Using a similar approach, Bowles arrived at an even higher figure, claiming that 54% of jobs in the EU and USA were under threat in the same time span (Bowles 2014). Since then, many studies, using a variety of methods, have added to these anticipations (Benzel et al. 2015; Bruun and Duka 2018; Halal et al. 2017; Schwab 2017; Chessell 2018; Gruetzemacher et al. 2020; Gruetzemacher et al. 2021). In a multi-country approach covering 32 nations, Nedelkoska and Quintini (2018) estimate that 14% of jobs are highly automatable (probability of automation over 70%) and that a further 32% face a risk of between 50 and 70%. These figures are confirmed by Pouliakas (2018) using disaggregated job descriptions from a key European Union survey (the European Skills and Jobs Survey, covering 49,000 EU adult workers), which allows him to factor in information on skill requirements (for China, Zhou et al. 2020; Xie et al. 2021).
Besides displacement, another widely anticipated trend is the polarisation of labour markets (Brito 2020; Korinek and Stiglitz 2019), similar to what occurred in the previous wave of automation (Autor et al. 2008; Goos and Manning 2007; Autor and Dorn 2013; Michaels et al. 2014; Goos et al. 2014; Graetz and Michaels 2018; Frey 2019; Autor et al. 2015; Scarpetta 2018; Bordot and Lorentz 2021). The reinstatement effect (see next section) might favour only workers with specialised skills (Frank et al. 2019; Holm and Lorenz 2021), whilst new jobs might be created in occupations that are “technologically lagging”, where automation cannot enter for price reasons, but where lower skills attract lower wages and precarious conditions (Petropoulos 2018). Consequently, AI might lead to a hollowing out of white-collar jobs in business, administration and knowledge industries. These concerns are confirmed by a European Parliamentary study (Deshpande et al. 2021).
3.2.2 Optimistic scenarios
Many economists, historians, business scholars and executives reject such pessimistic visions of massive technological unemployment (Moghaddam et al. 2020).
Large surveys of employers present contrasting evidence. A key Manpower survey in 2017 (ManpowerGroup 2017) provided interesting figures, with managers in some countries expecting to replace workers with machines, whilst others expected AI to increase hiring. Overall, the survey demonstrated optimism (also ServiceNow 2017). Via a questionnaire targeting 3000 companies, Bughin (2020) concludes that labor redistribution will occur.
Global accounting firms, banks and business consultancy groups are often adamant about the potential for AI to increase productivity overall (Saniee et al. 2017). Purdy and Daugherty, in a 2016 Accenture report (Accenture 2016), estimate that AI has the potential to increase labor productivity across the board by up to 40% by 2035. The report suggests that the highest growth sectors are likely to be information and communication, manufacturing, and financial services. Gillham et al. (2018), in a PwC report, estimate that global GDP would increase by 14% by 2030, an equivalent of up to $15.7 trillion. All geographical regions of the global economy are said to benefit. A 2020 McKinsey report evaluates the annual value added by AI in the banking industry at $1 trillion, or 15.4% of sales.
Some empirical work supports these predictions, with studies published by international institutions reporting the absence of any current impact on job markets (Georgieff and Milanez 2021 and Lane and Saint-Martin 2021 for the OECD). The ILO has published many reports on the future of work, which tend to be cautiously optimistic (for instance ILO 2018). A 2018 study by the World Economic Forum (2018) predicts that automation will result in a net increase of 58 million jobs, with a total 133 million new roles created and 75 million current workers displaced. In a recent OECD paper, Squicciarini and Staccioli (2022) study the impact of natural language processing techniques on specific occupations and find no significant effect on employment.
A study by Acemoglu, Autor, Hazell, and Restrepo (2020) concludes that there has been no significant effect at the occupation or industry level in the US over the last decade. This confirms Autor’s work of the early 2010s, in which he cautioned against overly pessimistic conclusions about technological unemployment (Autor and Handel 2013; Autor et al. 2015).
The standard approach to assessing the likelihood of technological unemployment is that of labour economists, who devise methods to extrapolate the impact of technology on particular tasks and to deduce from it the impact on occupations and industries more generally. Using this approach, Frey and Osborne’s earth-shaking predictions on the impact of AI were refuted by Arntz et al. (2016). In their report for the OECD, the German labour economists modified Frey and Osborne’s approach by employing an alternative method for linking tasks to occupations (see also their discussion paper, Arntz et al. 2019). With this new method, they found evidence for much lower outcomes, around 9% across the OECD (with a high of 12% in Austria and a low of 6% in South Korea). In another noteworthy approach, Princeton computer expert Edward Felten and his colleagues developed a model of how computerisation and specific AI functions can affect the activities making up particular occupations. They too land on more measured conclusions (Felten et al. 2018, 2019, 2021).
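The task-based logic underlying these exercises can be illustrated with a minimal sketch. Everything in the snippet below is an assumption for illustration only: the occupation names, task weights and automatability scores are invented, not taken from Frey and Osborne, Arntz et al., or any other study discussed here; only the 70% “high risk” threshold echoes the convention used by Nedelkoska and Quintini (2018).

```python
# Minimal sketch of the task-based approach to estimating automation exposure.
# All numbers are invented for illustration; they do not come from any study
# cited in this review.

# Each occupation is a bundle of (task, weight, assumed automatability) triples.
# Weights sum to 1; automatability is a hypothetical probability that current
# AI/ML systems could perform the task.
occupations = {
    "paralegal": [
        ("document review", 0.5, 0.9),
        ("client interaction", 0.3, 0.2),
        ("court filing logistics", 0.2, 0.6),
    ],
    "warehouse picker": [
        ("item retrieval", 0.6, 0.4),      # sensorimotor work: Moravec's paradox
        ("inventory scanning", 0.3, 0.8),
        ("exception handling", 0.1, 0.2),
    ],
}

HIGH_RISK_THRESHOLD = 0.7  # convention used, e.g., by Nedelkoska and Quintini (2018)


def occupation_exposure(tasks):
    """Aggregate task-level automatability into an occupation-level score,
    here as a simple weighted average (the crudest possible aggregation rule)."""
    return sum(weight * automatability for _, weight, automatability in tasks)


for name, tasks in occupations.items():
    score = occupation_exposure(tasks)
    label = "high risk" if score >= HIGH_RISK_THRESHOLD else "lower risk"
    print(f"{name}: exposure = {score:.2f} ({label})")
```

Much of the disagreement between occupation-level estimates (such as Frey and Osborne’s) and task-level estimates (such as Arntz et al.’s) comes down to choices that such a sketch makes visible: how occupations are decomposed into tasks, how automatability scores are assigned, and where the risk threshold is drawn.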
One approach, which can be used on its own but is often combined with labour economics modelling, draws on evidence from economic history. The history of automation to date demonstrates that new technologies have so far consistently created new jobs, both directly and indirectly (David 1990). In an important review of the literature, which combines a historical approach with a task-focused one, Ernst et al. (2019) conclude that AI is in fact likely to play out in ways different from previous waves, by increasing productivity and potentially creating more inclusive growth, provided correct educational measures are taken in countries that are part of globalised labour chains.
At the heart of the debate are arguments from mainstream economic theory. AI is viewed by many business experts as a technology that can spur innovation and provide competitive advantage (Davenport 2018; Polak et al. 2020 for the finance sector). In the pharmaceutical industry, AI is already used to spur innovation in a number of ways: for instance, to cite just three platforms, by mapping rare diseases (BERG platform); predicting the pharmaceutical properties of small-molecule candidates for drug development (XtalPi’s ID4 platform); or identifying new therapeutic uses for already validated products (BioXel). Comparable innovation effects are expected in other industries where the innovation process is inseparable from the research process (Cockburn et al. 2019). Innovation brings with it a “productivity effect”: increased productivity in one sector raises labour demand in other sectors. As Smith (2020, 134) explains, “the automation of one industry means higher demand for labor in other industries like the production of machines, the cultivation, extraction, or processing of raw materials, and the building of infrastructure like ports and highways.”
The productivity effect plays out in complex ways. Amongst the many economists to have studied it, the most influential in current debates on AI-driven automation are Acemoglu, Autor and Restrepo, and the authors they have collaborated with. In their contribution to The Economics of Artificial Intelligence (2017), Acemoglu and Restrepo summarise in plain terms the complex logics associated with the productivity effect. First, as noted, there is a rise in the demand for labour that follows automatically from the economic growth triggered by innovation. In theory, this rise in demand can be observed even in sectors where automation has occurred. Second, demand for labour can increase because automation triggers increased demand for capital. Third, automation can deepen existing automation, which increases productivity without substituting labour (since it is machines that are improved). Fourth, and most importantly, AI automation creates new tasks. The creation of new tasks is a key component of the labour economists’ arguments against universal technological unemployment.
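Schematically, and compressing the formal treatment in this literature into a summary of our own (not the authors’ notation), these logics can be written as a decomposition of the change in labour demand:

\[
\Delta \ln(\text{labour demand}) \;\approx\;
-\underbrace{\text{displacement}}_{\text{automation of existing tasks}}
+\underbrace{\text{productivity effect}}_{\text{growth raises demand}}
+\underbrace{\text{capital accumulation}}_{\text{demand for machines}}
+\underbrace{\text{deepening of automation}}_{\text{better machines, no substitution}}
+\underbrace{\text{reinstatement}}_{\text{new tasks for labour}}
\]

Whether AI produces net technological unemployment then hinges on the relative magnitudes of the displacement term and the four countervailing terms, which is precisely what the competing estimates reviewed in this section disagree about.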
Another macro-economic argument highlighted by Autor (2015) is captured in the image of the O-ring. It is worth citing a passage at length, as it illustrates many of the points raised by economists:
“tasks that cannot be substituted by automation are generally complemented by it. Most work processes draw upon a multifaceted set of inputs: labor and capital; brains and brawn; creativity and rote repetition; technical mastery and intuitive judgment; perspiration and inspiration; adherence to rules and judicious application of discretion. Typically, these inputs each play essential roles; that is, improvements in one do not obviate the need for the other. If so, productivity improvements in one set of tasks almost necessarily increase the economic value of the remaining tasks. An iconic representation of this idea is found in the O-ring production function studied by Kremer (1993). In the O-ring model, failure of any one step in the chain of production leads the entire production process to fail. Conversely, improvements in the reliability of any given link increase the value of improvements in all of the others. […] Analogously, when automation or computerization makes some steps in a work process more reliable, cheaper, or faster, this increases the value of the remaining human links in the production chain.” (Autor 2015, 6).
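Kremer’s O-ring production function, to which Autor appeals, can be given a formal sketch. In the standard presentation (our rendering, not Autor’s own notation), output is multiplicative in the success probabilities of the n tasks in the chain:

\[
y \;=\; B\,n\,k^{\alpha}\prod_{i=1}^{n} q_i,
\qquad
\frac{\partial^{2} y}{\partial q_i\,\partial q_j} \;=\; B\,n\,k^{\alpha}\prod_{l\neq i,j} q_l \;>\; 0 \quad (i\neq j),
\]

where \(q_i \in [0,1]\) is the probability that task i is performed successfully, k is capital and B a productivity parameter. The positive cross-partial derivative formalises Autor’s point: raising the reliability of one (automated) link in the chain raises the marginal value of improving every remaining (human) link.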
One specific dimension of the O-ring mechanism is that, by allowing for increased automation in the industrial and manufacturing sector, AI might have a “multiplier effect” in service occupations connected to them (Berger et al. 2017 for developing countries). On some accounts, this effect might even be felt in manufacturing industries servicing automated factories (Goos et al. 2015).
One other argument, combining economic and historical dimensions, relates to the specificity of the productivity increase expected from AI. It is well known that productivity growth has been slowing in developed economies over the past seventy years, with growth per decade decreasing from 2.3% in the 1950s to 1.8% in the 2010s (Gries and Naudé 2018). Lewis and Bell (2019) have shown that the decade since 2007 saw the lowest average labour productivity growth in the UK since the eighteenth century. Even in the last decade, productivity growth slowed significantly (Brynjolfsson et al. 2019). As early as 1987, the economist Robert Solow (1987) famously quipped: “you can see the computer age everywhere but in the productivity statistics”. Some researchers claim that AI may be the key to reversing this trend. As Munoz and Naqvi (2018, 1) write, “the world is seeing a solution to its productivity woes and the answer lies in the rise of Artificial Intelligence.” Leading economic historian Joel Mokyr agrees (Mokyr 2018) and is resolutely optimistic about AI’s potential to restart growth. In response to the objection that increases in productivity cannot yet be seen in the statistics, Brynjolfsson et al. (2019) argue that technological innovation suffers from an “implementation lag”, so that accurate measurements of the impact of AI on productivity are yet to come.
Another dimension of accounts defying pessimistic and dystopian scenarios relates to the kinds of jobs that AI might make possible, with the claim that it might make many jobs, either existing ones or new ones, more satisfying (Paschkewitz and Patt 2020), as they might involve higher skills or more creativity from workers (Makridakis 2017; Eglash et al. 2020). AI processes might also improve working conditions. For instance, some researchers cite evidence that AI implementation supports the job satisfaction of pharmaceutical workers by encouraging “increased contact with the hospital patients, the upskilling of tasks, and the interdisciplinary learning it afforded them” (p. 108). For many other workers, AI “can reduce the risk of dangerous or unhealthy working conditions, encourage the development of specialist or soft skills, and improve accessibility to certain jobs” (Deshpande et al. 2021, 8).
3.2.3 Critical assessments
a. Methodological doubts
Predictive exercises must meet formidable methodological challenges. The example of labour economics is informative. To assess the trajectories of labour markets, labour economists and computer experts first make lists of skills and abilities associated with particular occupations, which they gather from the datasets of national labour offices and other organisations (typically Burning Glass Labor Insight or the O*Net in the US), or from online job vacancy listings (Acemoglu et al. 2020). They then use statistical tools to connect these lists with what AI processes are assumed to be able to perform. The competitive tipping point at which AI becomes financially attractive and thus substitutes for labour is calculated via established models of neo-classical economics. The new models these exercises produce are standardly tested against established employment trends that responded to previous technological innovation. For the external observer, these methods seem to deliver substantial lessons for understanding past trends, but they are far less convincing when it comes to future ones. The frequency with which the mathematical models are revised, with new parameters and axiomatic hypotheses introduced in each new paper, gives the non-specialist the sense that there is a gap between the assurance with which conclusions are stated in plain English and the ability of the models to capture reality, notably given the number of idealizing assumptions underpinning the analyses. The models appear to describe mathematically consistent worlds, but it is less clear that they describe our messy one, let alone what it might become in the future. At the macro-level, the factors involved in economic reality across different contexts are so numerous and variable that one would assume significant unpredictability in how AI might complement rather than replace human work. As Acemoglu, Autor and Restrepo themselves have shown, the complementary effect leads to the creation of new tasks: as workers’ time is liberated from routine, existing jobs take on new content, and needs for new tasks arise. New technology creates entirely new jobs (Wilson et al. 2017). By definition, the lay reader is tempted to say, if needs, tasks and jobs will be new, it seems difficult to guess what they might be, let alone capture them in models premised on existing job profiles and on descriptions of the tasks entailed in currently existing occupations (see the 2018 ILO literature review for precisely this point; also Arntz et al. 2019).
The predictive challenge is compounded by the slipperiness of skills and tasks noted above, at ground level so to speak. Methods for making predictions about the impact of AI on tasks using large datasets seem ill-suited to capturing the complexity, at the micro-level, of what particular jobs actually involve in their specific context (Ebben 2020). This is true not only for technical reasons, because many jobs are actually more difficult to perform than is often assumed, but also for economic ones: as De Stefano (2020) argues, assuming that automation necessarily increases productivity might in some cases rest on an overly narrow view of the tasks automated and on erroneous measures of efficiency.
Similarly, methods that rely on case studies only inform us about a particular job or occupation at a particular time and place, and there are methodological risks in generalising from particular cases. Many researchers highlight the difficulty of generalising from one national context to another, as differences in the structure of the economy, the education system, and so on, mean that AI impacts work differently in different economies (Luksha et al. 2015; Spencer and Slater 2020). In a journal that has published key studies in this area, an important review by Clifton et al. (2020, 11) highlights that, “the impact of technology on employment is not deterministic—the deployment of these new technologies is contingent upon a multitude of factors, including public policy, firm strategy and geography, among others.”
Finally, external conditions directly impact the deployment of AI. As Hwang shows in a landmark study (2018), even though they are taken for granted in most reports, computational power supported by adequate hardware (notably quality semiconductors) and the availability of energy are basic material conditions of AI systems. They are non-trivial conditions that need to be taken into account when calculating the likelihood of human tasks and jobs being replicated in the real world. Geopolitical, economic and resource limitations might well slow down or hamper AI deployment simply because the material support is lacking. Similarly, in a world prone to climate crisis, the environmental impact of AI (Corbett 2018; Dhar 2020; Lucivero 2020; Strubell et al. 2020; van Wynsberghe 2021) might well constrain its deployment, at least if sustainability becomes a serious parameter in economic activity (Nishant et al. 2020 for a contrary, optimistic view).
b. Marxist critiques
A number of authors consider AI specifically as a form of capitalistic innovation, and show that the logic of capitalism dictates that work is a long way from becoming obsolete (Spencer 2018). The absence of technological unemployment is in itself nothing to celebrate though, as it coincides with higher levels of precariousness, underemployment and exploitation.
Drawing on Cohen’s classical reconstruction of historical materialism (2000), Barbara Nieswandt (2021) shows that private property in the means of production and the profit motive make it unlikely that AI-based automation will lead to massive job losses. Strict property rules mean that technological innovation is exclusively owned and put to use by private owners, who also capture the outcomes of its productivity-raising potential. The capitalist imperative means that the deployment of technologies in a capitalist context is guided exclusively by the search for profit. The combination of these two factors means that technological innovation in a capitalist economy serves to increase output as a way to increase profit. Other alternatives cannot be countenanced given the rules of the game. This is true not just of the altruistic alternative that would mobilise technology to reduce working hours, but also of the possibility of using productivity gains to reduce the wage bill whilst keeping production constant. Given the other tools capitalists possess to ensure the exploitation of workers, the search for profit is better served by increasing output than by shedding jobs, which means that there is no incentive for massive job churn. As Dinerstein et al. (2021) concur, the decisive factor to consider is “not technological opportunity” but the “profitability criterion”.
Another argument draws on readings of Marx that emphasise class antagonism as a key explanatory factor in the organization of capitalist production, including in its adoption of technological innovation. A major use of technology is to bypass labour forces when they are well organized and push back effectively against capital’s imperatives (Mueller 2021). When the capacity of labour to organize is weakened, the need to shift work from labour to capital is less pressing. This argument is confirmed by Fleming through his focus on power in organisations (Fleming 2019). Labour’s current power deficit relative to capital means that automation is “bounded”. This is compounded by the cheap cost of labour, itself an effect of weakened labour protections. When human labour is relatively cheap, there is no incentive for capital to invest in and deploy expensive technologies. Surplus value can be extracted just as well from human workers (Dinerstein 2021).
Third, in the current phase of capitalism, with its distinctive property structures and ideological underpinnings, it is shareholder value, not productivity, that matters. If shareholder value can be ensured through means other than investment in technology, which comes with significant sunk costs, then that path will be chosen. With the sophistication of financial tools and the protection of permissive taxation schemes, there is no pressing incentive in many industries to invest in technology designed to replace human labour (Smith 2020). In other words, even if the previous arguments did not obtain, it would still be the case that the current economic context does not incentivize the massive deployment of job-replacing technologies.
A fourth argument (Benanav 2020; Smith 2020) concentrates on the differential impact of productivity across sectors of the economy. Overcapacity in manufacturing and agriculture leads to an exodus of workers towards the service and care sectors, where productivity gains are achieved through wage suppression and the degradation of working conditions. Already in the early 1990s, Gorz predicted that automation would result in the rise of new occupations centering on personal services, tending to the needs of an elite of technical and knowledge workers (Gorz 2011). This triggers employment in new service sectors, even a return to older forms of dependent labour. These new occupations come with low wages, precarious working conditions, insecure tenure and uncertain hours. But there is no massive job churn as a result of the widespread deployment of automated work processes.
These Marx-inspired analyses of technological unemployment are supported by the analyses of historians who directly contradict optimistic readings like Mokyr’s, and emphasise the slowing down of innovation under financial capitalism, where profit maximisation occurs through speculation rather than changes in industrial paradigms (Gordon 2014, 2015).
4 Algorithmic management
In this section, we shift from macro- to micro-issues of work, where AI is already having an impact.
Algorithmic management covers the tasks traditionally performed by human managers: hiring employees (from CV selection to automation of the hiring process), optimising the labour process (through the tracking of worker movements, for instance GPS tracking or route-maximisation in transport and logistics), evaluating workers (through rating systems), automated scheduling of shifts, coordinating customer demand with service providers, monitoring workers’ behaviour, and algorithmic incentivisation (through algorithm-based “nudges” and penalties) (Duggan et al. 2020 for a thorough review). Algorithms are widely used by companies such as Airbnb (Cheng and Foley 2019), Uber (Möhlmann and Henfridsson 2019; Muller 2019; Amorim and Moda 2020), and Amazon (Park et al. 2021; Chesta 2021) in precisely these ways, to manage, direct, recruit, evaluate, and even terminate workers. Business scholars highlight the technology’s ability to improve workflows, for instance through optimal job allocation (Jarrahi et al. 2021), to cut costs, say in hiring, and to improve predictive power in all dimensions of business activity. From this point of view, AI-based algorithmic management offers organisations the chance to delegate decision-making power to more efficient and effective managers (Von Krogh 2018; Araujo et al. 2020).
AI needs data regarding workers’ skills, time use, and behavior, which in turn makes worker monitoring a necessity. The more data are fed into AI processes, the more effective their use (Gal et al. 2020; Ebert et al. 2021). Some aspects of worker monitoring seem benign and might even be benevolent, as when monitoring is used to increase digital security, prevent fraud, or improve worker health and safety (De Stefano 2019). However, many critical management and organizational theorists, labour lawyers and sociologists of work, as well as scholars studying human–machine interactions, highlight concerns with the spread of algorithmic management (Schlund and Zitek 2021). AI further increases the power imbalance between managers and employees (Jarrahi et al. 2021), notably as a result of the asymmetry of information. In a lucid report from the Data and Society Research Institute, Mateescu and Nguyen (2019) usefully summarise these concerns around four main points.
Surveillance and control: algorithmic management raises obvious issues of privacy (Bhave et al. 2020; Ebert et al. 2021; Fukumura et al. 2021; Tsamados et al. 2022), not just at the workplace, but also at home, notably following the pandemic-induced shift to home-based working (Collins 2020). Privacy infringements can occur at all stages of the data cycle: at the time of collection, in the analysis of the data, in the use of the data, and when data ought to be erased. Breaches of privacy touch on a fundamental human right, but they also represent a strong leverage tool for managers, with which they can exert control and undermine autonomy (Shapiro 2018). Haenlein et al. (2022) give the example of a technology such as Status Today, which “can scrutinize staff behavior on a minute-to-minute basis by collecting data on who sends emails to whom at what time, who accesses and edits files, and who meets whom and allows firms to compare such activity data with employee performance.” Surveillance can lead to increased pressure on workers to perform, taking away moments of respite, as is well documented in warehouse (Hanley and Hubbard 2020) and platform work (Newlands 2021a, b). This can have severe and long-term impacts on well-being. Algorithmic control of the work process takes away the dimensions of personal intervention, choice and even creativity (Huang 2021).
A key study of algorithmic management, emphasising the contestation between management and employees around the new tools of control and coercion offered by AI, is Kellogg et al. (2020), which uses labour process theory as its framework (Gandini 2019 for gig work). This framework is particularly apt for studying the concrete ways in which the capitalist imperative translates into management’s attempts to control the workforce at the point of production. The study is also valuable for its survey of the literature that makes the case for algorithmic management as a new tool for increased efficiency in the running of organizations, through better decision-making, better coordination and better organizational learning.
Algorithmic management might perpetuate societal biases and reproduce discriminatory practices at work, whether the discrimination is built into the algorithms, or into management’s use of the algorithms, or results from customers’ ratings of workers (Noble 2018; Benjamin 2019; Kellogg et al. 2020; Akter et al. 2021; Zajko 2021; Heinrichs 2022). Increasing amounts of empirical evidence confirm this risk (Obermeyer et al. 2019; Datta et al. 2015; Lambrecht and Tucker 2019). AI also gathers and processes data in ways that are often hidden from workers, leading to decisions made in ways which humans cannot, or at least cannot quickly or efficiently, process. Indeed, if AI processes could be understood and tracked transparently, this would undermine the point of having them in the first place (Buchanan and Badham 2020). Algorithm-based decisions emerge as if from a “black box” (Pasquale 2015) or “magic box” (Thomas et al. 2018). Algorithmic management, therefore, makes a unique demand for the “blind trust” of workers (Leicht-Deobald et al. 2019). Workers are managed and directed on the basis of reasons they are not provided with, preventing collaborative or even consultative leadership styles and substituting a directive or even coercive style in their place (Dunphy and Stace 1993).
Finally, the literature raises accountability concerns. Management by AI is by definition about removing the human element from the decision-making process. A number of Human Resource Management experts view this with skepticism (Duggan et al. 2020). The machinic objectivity of AI-based management processes has the effect of creating a screen between worker and organisation, and between worker and management. The company appears to be absolved of its responsibilities, removing existing avenues through which workers understand these responsibilities and demand that they be met. Workers are put at the mercy of processes over which they have little control or recourse (Loi et al. 2020; Veen et al. 2020; Purcell and Brook 2020; Joyce and Stuart 2021). These concerns overlap with debates surrounding transparency standards for algorithmic decision-making, recently canvassed by Günther and Kasirzadeh (2022).
5 Platform work
The power of AI for gathering and processing vast amounts of data can be harnessed in traditional work settings where employees are hired under work arrangements and labour contracts predating AI automation. One of the interesting lessons of Srnicek’s Platform Capitalism (2017) is that it draws attention to industrial platforms operating in factory settings, pointing to an aspect of AI far less visible than its use in gig work. However, the computational power of AI allows it to function not just as a new industrial tool within pre-existing work processes. It also becomes the centerpiece of a new business model that radically alters modes of working, as well as the conditions of employment and the interactions of workers with management and customers.
Since they first emerged in 2008–2009, AI-backed platforms have attracted a vast amount of scholarly attention. Analysis of their structure and functioning has been performed mostly by communication theorists and media specialists, and the analysis of the new modes of work and employment they generate by sociologists, management experts and organizational theorists.
One way to conceptualise platform work is by focusing on the relationship between worker, employer and customer. This was the approach taken by Duggan and colleagues in a 2017 review of the literature on “gig work” published in a journal of Human Resources Management (Duggan et al. 2017; also Schmidt 2017). This focus leads to a useful distinction between capital platform work, crowdwork and app-work. The first corresponds to what is known as the “sharing economy”, where individuals use the platform as a digital commodity market to sell goods or assets (like the use of their accommodation via Airbnb). Workers here operate like small entrepreneurs. In crowdwork, the platform is like a digital labour market, where jobs are tendered to potential workers and the work is performed via the platform (as in Amazon’s Mechanical Turk). There are different types of crowdwork, depending on whether large tasks are divided into small tasks performed by different individuals, or similar work is performed simultaneously by several individuals. In crowdwork, the relationship of worker to employer is minimal. App-work is work provided in the physical world (food delivery, ride-hailing services) where the platform connects worker and customer. This is the type of platform work that has attracted the most attention, notably because management issues are prominent in it.
To find some bearings in the large literature dedicated to platform work, one of the most useful resources to consult is the review by the leading sociologists of work Vallas and Schor (2020). Their sociological lens leads them to construct a taxonomy based not on employment relations but on the groups performing different types of work (notably along a scale of skill complexity). This taxonomy offers another entry point for studying the conditions of platform work (income levels, geographical location, terms and conditions) and the issues each group of workers encounters specifically. The sociologists identify five groups of platform workers: architects and designers of platforms; cloud-based consultants; gig workers; micro-task crowdworkers; and content producers who perform “aspirational labour”.
The study proposes a map of the rich terrain of recent studies of platform work. Four thematic families are identified and critically discussed.
The first is the utopian view of platforms as instruments enabling individuals to share goods and services outside the corporate form, where work evades the traditional management relation. Some authors think this form of production and exchange will boost economic activity and create a new form of capitalism, by reducing transaction costs and because peer-to-peer interaction fosters trust. A key reference here is Sundararajan (2017). Vallas and Schor reference a large number of studies on working conditions for app-workers (see the more recent Moore and Woodcock 2021), and conclude that power asymmetry rather than horizontal democracy has so far been the experience of workers.
The second perspective is the polar opposite, raised by researchers who see in platforms a new form of Weberian “iron cage”. Rahman’s (2021) focus on “invisible cages”, for instance, explores the experiences of freelance platform workers contending with opaque algorithms that evaluate their performance and dictate their future success. Here, the emphasis is on the different forms of control platforms enjoy in comparison with traditional workplaces, notably through surveillance and monitoring, but also the gamification of work, symbolic rewards and inducements (Galiere 2020; Perrig in Moore and Woodcock 2021). As Vallas and Schor (2020, 278) write, “Max Weber’s fears regarding bureaucratic subordination (the iron cage, however translated) pale in comparison with the prodigious powers over human labor that digital technologies are thought to enjoy.” However, this view of platforms underestimates the capacity of platform workers to evade control, resist and organise.
The third image of platforms sees in them an economic model that merely accelerates a process of precarisation already under way under previous regimes. Vallas’ own work, some of it in collaboration with Arne Kalleberg, another leading sociologist of work, is a major reference here (Kalleberg and Vallas 2017, 2018). Vallas and Schor reject the overly homogeneous view of platform workers this perspective assumes: many platform workers in fact use the work to complement other income, and precarisation through platforms is far from a universal trend.
The fourth family of studies focuses on the capacity of platform technology to be put to very different uses depending on the institutional context. Platforms can create new forms of control, or instead help to regulate the work and protect the workers. Against this, the two sociologists point out that platform work has fixed attributes that resist institutional shaping, and indeed that the most powerful platforms are the ones that shape their institutional environment.
Vallas and Schor themselves present an alternative image, one in which power is centralised for the key technical and economic functions while control is distributed and largely relaxed. Platforms, they argue, provide a new mode of governance and a new model of economic activity. Most importantly, “platforms greatly relax personnel selection criteria and affords workers considerable autonomy over when and how often to work” (2020, 283). One feature of this relaxation of managerial control is the heterogeneity of the workforce it produces, which works against worker organisation. Whilst AI can be used in traditional settings for increased surveillance and control, on platforms, the two sociologists argue, surveillance cannot be as strict. The heterogeneity of the workforce also extends to workers’ locations: they are scattered across regions, and even around the world, which creates isolation and again hinders organisation for collective action.
One aspect of crowdwork worth highlighting is micro-tasking crowdwork, which goes to the heart of AI. As sociologists of technological innovation have shown, a great deal of human labour is needed to fill the cracks of AI (Gray and Suri 2019; Tubaro and Casilli 2019; Tubaro et al. 2020). This human labour is often obscured because “corporate communication highlights the role of technology, not human contribution, especially in the AI industry” (Tubaro 2021, 939; along similar lines, Newlands 2021a, b). Gray and Suri (2019) call this human work, invisible to the outside but necessary for “automated” processes to function, “ghost work”.
A key lesson from the sociology of work perspective is that overly general statements about platform work are impossible (Schor et al. 2020). Different types of workers operate under different conditions and have vastly different experiences. This is true, for instance, of job quality. Micro-tasking crowdwork can be hugely injurious to emotional and mental health (Hermosillo and Deng 2021), for instance for workers screening violent content. Much crowdwork and gig work is highly flexible and involves irregular hours, which can undermine work–life balance and, as a result, affect physical and mental health and personal relationships (Muntaner 2018; Allan et al. 2021). The feature Vallas and Schor highlight most is the devolution of control in many platform environments, which allows workers to enjoy more autonomy in completing their tasks and in their relationships with clients. Service work for an Uber driver, for instance, is far less scripted than service work in more traditional settings. Flexible hours might also be viewed as increased autonomy. But what can in some conditions amount to an increase in autonomy can also mean a lack of training, notably around occupational health and safety: workers are left to their own devices in many gig environments. By the same token, however, AI can also complement work in some professions, notably by taking away the routine aspects of the job, allowing workers to focus on the more creative or rewarding parts of the work.
6 Conclusion: the politics of AI work
Amongst the many potential societal impacts of AI, this review has focused on those that affect the world of work. If, as a result of AI deployment in work, technological unemployment or wealth polarisation occurs on a significant scale, or if crowdwork and app-work become widespread employment models (so far they are not), bringing further precariousness to employment conditions, then the impact on current social organisation will be significant (Acemoglu and Restrepo 2020b). This is because current social systems continue to be organised around full-time employment in jobs attracting good wages as the condition for the complete enjoyment of social and economic rights, including full health and pension coverage. One consequence of the deployment of AI could therefore be further stress on already ailing systems of social protection (Konkolewsky 2017).
In response to the possibility of social crisis resulting from the new industrial revolution, many progressive thinkers advocate turning this threat into an opportunity for a radical transformation of social and economic organisation (Caruso 2018): through some version of a basic income (Susskind 2020), by leaving behind the modern work ethic (Weeks 2011), or indeed by harnessing full automation for a leap into “luxury communism” (Bastani 2018). Outside this blue-sky literature, we find a spectrum of normative answers to the challenges of AI. At one end are philosophical studies of the “ethics” of AI, which tend to overlook macro-economic and social contexts and focus instead on the ethical norms embedded in particular AI processes and machines (Floridi 2014 is an informative exception). Other social-scientific approaches, seen for example in Arogyaswamy (2020) and Stahl et al. (2022), combine descriptive and programmatic foci. Much of the normative discussion in these literatures takes the form “governments ought to do x, y, z”, or “such and such regulatory principles ought to be enshrined in AI development”. Typical in this respect is the last part of Susskind’s A World Without Work (2020). To counteract the deleterious effects of technological unemployment, Susskind canvasses a “big state” as the solution: a state that imposes high taxes on elite workers who manage to remain relevant in a depleted labour market, on privileged individuals inheriting wealth, on big business and on what Marx called “constant capital” (machines); a state that introduces a conditional basic income; and one that shapes the ways in which individuals fill their leisure time to find meaning and purpose. As a blueprint, Susskind’s proposal is detailed and consistent, but it is utterly unrealistic in current ideological and geopolitical conditions, particularly in the few countries spearheading AI innovation. Ideal-scenario policy recommendations have their uses, but the immense gap between such propositions and economic and political reality points to the need for another kind of approach, one that focuses precisely on what makes these hoped-for developments of AI doubtful, at least in the short term.
Recent research in political science and labour law focuses on precisely these aspects (Prassl 2018). What is at stake is the clash between the two imperatives noted at the outset and the plausibility of a “good AI”, or “AI for good”. Regarding the capitalist imperative, a number of arm wrestles are under way between a few immensely powerful corporations and the actors representing workers: activists, trade unions, cities and regions affected by AI (typically by platforms such as Airbnb or Uber), branches of national governments, multinational organisations like the EU, and international bodies such as the ILO. One major battle concerns the legal status of gig workers, namely whether platforms can divest themselves of all responsibilities towards the people they employ. For the US, the research of labour law expert Veena Dubal is particularly significant (for instance Dubal 2021). Other battles concern workers’ right to privacy (De Stefano 2020 for Europe) and the development of regulatory frameworks to give an ethical frame to AI development (Salento 2018). AI companies, notably the largest platforms, are actively counteracting these efforts through direct and indirect political interventions: lobbying in formal legislative processes, challenging legal decisions, mobilising their customer base (Vallas and Schor 2020; Thelen 2018; Collier et al. 2017, 2018), or even directly flexing their digital muscle against cities, regions and even nation states (as against Australia in 2021; Quinn 2021).
Regarding the nationalist imperative, one might be sceptical of the willingness of states to enforce principles of “good AI” when they are engaged in high-stakes contests over geopolitical hegemony. One aspect of this struggle concerns control over the supply chains and production factors behind the manufacturing of semiconductors (Hwang 2018). Given the rhetoric used by the main actors, it is difficult to see ethical considerations having much sway over some of the possible extensions of AI.
What these extraneous economic and political parameters indicate is that AI does not by itself determine ‘good’ or ‘bad’ outcomes for the world of work. Rather, what matters is the kind of world in which AI will be developed and deployed.
References
Accenture (2016) Why Artificial Intelligence Is the Future of Growth
Acemoglu D (Ed.) (2021) Redesigning AI. MIT Press
Acemoglu D, Autor D (2011) Skills, tasks and technologies: Implications for employment and earnings. Handbook Labor Econom Elsevier 4:1043–1171
Acemoglu D, Restrepo P (2018a) Low-skill and high-skill automation. J Hum Cap 12(2):204–232
Acemoglu D, Restrepo P (2018b) The race between man and machine: Implications of technology for growth, factor shares, and employment. Am Econ Rev 108(6):1488–1542
Acemoglu D, Restrepo P (2019) Automation and new tasks: How technology displaces and reinstates labor. J Econ Perspect 33(2):3–30
Acemoglu D, Restrepo P (2020a) Unpacking skill bias: Automation and new tasks. AEA Papers and Proceedings
Acemoglu D, Restrepo P (2020b) The wrong kind of AI? Artificial intelligence and the future of labour demand. Camb J Reg Econ Soc 13(1):25–35
Acemoglu D, Autor D, Hazell J, Restrepo P (2020) AI and jobs: Evidence from online vacancies. Nat Bur Econ Res
Akter S, McCarthy G, Sajib S, Michael K, Dwivedi YK, D’Ambra J, Shen K (2021) Algorithmic bias in data-driven innovation in the age of AI. Elsevier
Alarie B, Niblett A, Yoon AH (2018) How artificial intelligence will affect the practice of law. Univ Tor Law J 68:106–124
Allan BA, Autin KL, Wilkins-Yel KG (2021) Precarious work in the 21st century: A psychological perspective. J Vocat Behav 126:103491
Allen GC (2019) Understanding China's AI strategy: Clues to Chinese strategic thinking on artificial intelligence and national security, Center for a New American Security Washington, DC
Amorim H, Moda F (2020) Work by app: Algorithmic management and working conditions of Uber drivers in Brazil. Work Organ Labour Global 14(1):101–118
Applebaum HA (1992) The Concept of Work: Ancient, medieval, and modern. SUNY Press
Araujo T, Helberger N, Kruikemeier S, de Vreese CH (2020) In AI we trust? Perceptions about automated decision-making by artificial intelligence. AI Soc 35(3):611–623
Arntz M, Gregory T, Zierahn U (2019) Digitization and the future of work: Macroeconomic consequences. Handbook Labor Hum Resour Popul Econ 1:1–29
Arntz M, Gregory T, Zierahn U (2016) The risk of automation for jobs in OECD countries: A comparative analysis
Arogyaswamy B (2020) Big tech and societal sustainability: an ethical framework. AI Soc
Attewell P (1990) What is skill? Work Occup 17(4):422–448
Autor DH, Dorn D (2013) The growth of low-skill service jobs and the polarization of the US labor market. Am Econ Rev 103(5):1553–1597
Autor DH (2015) Why are there still so many jobs? The history and future of workplace automation. J Econ Perspect 29(3):3–30
Autor DH, Handel MJ (2013) Putting tasks to the test: Human capital, job tasks, and wages. J Law Econ 31(S1):S59–S96
Autor DH, Levy F, Murnane RJ (2003) The skill content of recent technological change: An empirical exploration. Q J Econ 118(4):1279–1333
Autor DH, Katz LF, Kearney MS (2008) Trends in US wage inequality: Revising the revisionists. Rev Econ Stat 90(2):300–323
Autor DH, Dorn D, Hanson GH (2015) Untangling trade and technology: Evidence from local labour markets. Econ J 125(584):621–646
Autor DH (2014) Polanyi's paradox and the shape of employment growth. Nat Bur Econ Res
Bales RA, Stone K (2020) The invisible web at work: artificial intelligence and electronic surveillance in the workplace. Berkeley J Employm Labor Law 41:1
Barbieri L, Mussida C, Piva M, Vivarelli M (2020) Testing the employment and skill impact of new technologies. Handbook Labor Hum Resour Popul Econ 1–27
Barton D, Woetzel J, Seong J, Tian Q (2017) Artificial intelligence: implications for China
Bastani A (2018) Fully Automated Luxury Communism. Verso
Benanav A (2020) Automation and the Future of Work. Verso
Benjamin R (2019) Assessing risk, automating racism. Science 366(6464):421–422
Benzell SG, Kotlikoff LJ, LaGarda G, Sachs JD (2015) Robots are us: Some economics of human replacement, Nat Bur Econ Res
Berger T, Chen C, Frey C B (2017) Cities, Industrialization and Job Creation: Evidence from Emerging Countries, pp 1–25. Mimeo, Oxford Martin School
Berman BJ (1992) Artificial intelligence and the ideology of capitalist reconstruction. AI & Soc 6(2):103–114
Bessen JE, Impink SM, Reichensperger L, Seamans R (2018) The business of AI startups. Boston University School of Law, Law and Economics Research Paper 18–28
Bhave DP, Teo LH, Dalal RS (2020) Privacy at work: A review and a research agenda for a contested terrain. J Manag 46(1):127–164
Bordot F, Lorentz A (2021) Automation and labor market polarization in an evolutionary model with heterogeneous workers (No. 2021/32). LEM Working Paper Series
Bowles J (2014) The Computerisation of European Jobs, blog, 24 July, Bruegel, http://bruegel.org/2014/07/the-computerisation-of-european-jobs/
Brito DL (2020) Automation does not kill jobs, it increases inequality. Rice University
Brooks C, Gherhes C, Vorley T (2020) Artificial intelligence in the legal sector: pressures and challenges of transformation. Camb J Reg Econ Soc 13(1):135–152
Bruun EP, Duka A (2018) Artificial intelligence, jobs and the future of work: Racing with the machines. Basic Income Studies 13(2)
Brynjolfsson E, McAfee A (2014) The second machine age: Work, progress, and prosperity in a time of brilliant technologies, WW Norton & Company
Brynjolfsson E, Rock D, Syverson C (2019) Artificial intelligence and the modern productivity paradox: a clash of expectations and statistics. In: Agrawal A, Gans J, Goldfarb A (eds) The Economics of Artificial Intelligence: An Agenda. University of Chicago Press
Brynjolfsson E, McAfee A (2011) Race Against the Machine. Digital Frontier Press, Lexington, MA
Buchanan D, Badham R (2020) Power, Politics, and Organizational Change. SAGE Publications, United Kingdom
Budd JW (2011) The thought of work. Cornell University Press
Buera FJ, Kaboski JP, Rogerson R (2015) Skill biased structural change, Nat Bur Econ Res
Bughin J (2020) Artificial Intelligence, Its Corporate Use and How It Will Affect the Future of Work. Capitalism, Global Change and Sustainable Development, Springer: 239–260
Caruso L (2018) Digital innovation and the fourth industrial revolution: epochal social changes? AI Soc 33(3):379–392
Casilli AA (2019) En attendant les robots-Enquête sur le travail du clic, Média Diffusion
Cheng M, Foley C (2019) Algorithmic management: The case of Airbnb. Int J Hosp Manag 83:33–36
Chessell D (2018) The jobless economy in a post-work society: How automation will transform the labor market. Psychosociol Issues Hum Resour Manag 6(2):74–79
Chesta RE. (2021) A New Labor Unionism in Digital Taylorism? Explaining the First Cycle of Worker Contention at Amazon Logistics. In Digital Supply Chains and the Human Factor, pp 181–198. Springer, Cham
Clarke L, Winch C (2006) A European skills framework?—but what are skills? Anglo-Saxon versus German concepts. J Educ Work 19(3):255–269
Clifton J, Glasmeier A, Gray M (2020) When machines think for us: the consequences for work and place. Oxford University Press UK 13:3–23
Cockburn IM, Henderson R, Stern S (2019) The impact of artificial intelligence on innovation. The economics of artificial intelligence: An agenda, 115–152
Collier RB, Dubal VB, Carter CL (2017) Labor platforms and gig work: the failure to regulate. Working Paper 106-17, Institute for Research on Labor and Employment, University of California, Berkeley. http://www.irle.berkeley.edu/files/2017/Labor-Platforms-and-Gig-Work.pdf
Collier RB, Dubal VB, Carter CL (2018) Disrupting regulation, regulating disruption: the politics of Uber in the United States. Perspect Politics 16(4):919–937
Collins P (2020) The right to privacy, surveillance-by-software and the “Home-Workplace”. UK Labour Law Blog, 3 September. Available at: https://uklabourlawblog.com/2020/09/03/the-right-to-privacy-surveillance-bysoftware-and-the-home-workplace-by-dr-philippa-collins/ (accessed 17 March 2021)
Corbett CJ (2018) How sustainable is big data? Prod Oper Manag 27(9):1685–1695
Coveri A, Cozza C, Guarascio D (2021) Monopoly capitalism in the digital era. No. 2021/33. LEM Working Paper Series
Danaher J (2019) Automation and Utopia: Human Flourishing in a World without Work. Harvard University Press
Datta A, Tschantz MC, Datta A (2015) Automated experiments on ad privacy settings. Proc Priv Enh Technol 2015(1):92–112
Daugherty PR, Wilson HJ (2018) Human+ machine: Reimagining work in the age of AI. Harvard Business Press
Davenport TH (2018) The AI Advantage: How to Put the Artificial Intelligence Revolution to Work. MIT Press
David PA (1990) The dynamo and the computer: an historical perspective on the modern productivity paradox. Am Econ Rev 80(2):355–361
De Stefano V (2019) Negotiating the Algorithm: Automation, Artificial Intelligence, and Labor Protection. Comp Lab L & Pol'y J 41:15
De Stefano V (2020) Algorithmic bosses and what to do about them: automation, artificial intelligence and labour protection. In Economic and policy implications of artificial intelligence, pp 65–86. Springer, Cham
De Vries GJ, Gentile E, Miroudot S, Wacker KM (2020) The rise of robots and the fall of routine jobs. Labour Econ 66:101885
Delfanti A, Frey B (2021) Humanly extended automation or the future of work seen through Amazon patents. Sci Technol Hum Values 46(3):655–682
Deranty J-P (2021) Post-work society as an oxymoron: Why we cannot, and should not, wish work away. Eur J Soc Theory. https://doi.org/10.1177/13684310211012169
Deshpande A, Picken N, Kunertova L, DeSilva A, Lanfredi G, Hofman J (2021) Improving working conditions using Artificial Intelligence. European Parliament Policy Department for Economic, Scientific and Quality of Life Policies
Dhar P (2020) The carbon impact of artificial intelligence. Nat Mach Intell 2(8):423–425
Dinerstein AC, Pitts FH (2021) A World Beyond Work? Labour, Money and the Capitalist State Between Crisis and Utopia. Emerald Group Publishing
Duggan J, Sherman U, Carbery R, McDonnell A (2020) Algorithmic management and app-work in the gig economy: A research agenda for employment relations and HRM. Hum Resour Manag J 30(1):114–132
Dunphy D, Stace D (1993) The strategic management of corporate change. Hum Relations 46(8):905–920
Ebben M (2020) Automation and Augmentation: Human Labor as Essential Complement to Machines. IGI Global, pp 1–24, https://doi.org/10.4018/978-1-7998-2509-8.ch001
Ebert I, Wildhaber I, Adams-Prassl J (2021) Big Data in the workplace: Privacy Due Diligence as a human rights-based approach to employee privacy protection. Big Data Soc 8(1):20539517211013052
Eglash R, Robert L, Bennett A, Robinson KP, Lachney M, Babbitt W (2020) Automation for the artisanal economy: enhancing the economic and environmental sustainability of crafting professions with human–machine collaboration. AI Soc 35(3):595–609
Ekbia HR, Nardi BA (2017) Heteromation, and other stories of computing and capitalism. MIT Press
Ernst E, Merola R, Samaan D (2019) Economics of artificial intelligence: Implications for the future of work. IZA J Labor Policy 9(1):1–35
Eubanks B (2022) Artificial Intelligence for HR: Use AI to Support and Develop a Successful Workforce. Kogan Page, United Kingdom
Felten EW, Raj M, Seamans R (2018) A method to link advances in artificial intelligence to occupational abilities. In: AEA Papers and proceedings, vol 108, pp 54–57
Feenberg A (1991) Critical theory of technology. Oxford University Press, New York
Feenberg A (2002) Transforming technology: A critical theory revisited. Oxford University Press
Feenberg A (2012) Questioning technology. Routledge
Felten EW, Raj M, Seamans R (2019) The occupational impact of artificial intelligence: Labor, skills, and polarization. NYU Stern School of Business
Felten E, Raj M, Seamans R (2021) Occupational, industry, and geographic exposure to artificial intelligence: A novel dataset and its potential uses. Strategic Manag J
Fleming P (2019) Robots and organization studies: Why robots might not want to steal your job. Organ Stud 40(1):23–38
Floridi L (2014) Technological unemployment, leisure occupation and the human project. Philos Technol 27:143–150
Floridi L, Cowls J (2019) A unified framework of five principles for AI in society. Available at SSRN 3831321
Ford M (2015) Rise of the Robots. Cornell University Press, New York
Ford M (2021) Rule of the Robots: How Artificial Intelligence Will Transform Everything. John Murray Press, UK
Frank MR, Autor D, Bessen JE, Brynjolfsson E, Cebrian M, Deming DJ, Feldman M, Groh M, Lobo J, Moro E (2019) Toward understanding the impact of artificial intelligence on labor. Proc Natl Acad Sci 116(14):6531–6539
Frey CB (2019) The technology trap. Princeton University Press
Frey CB, Osborne MA (2017) The future of employment: How susceptible are jobs to computerisation? Technol Forecast Soc Chang 114:254–280
Fukumura YE, Gray JM, Lucas GM, Becerik-Gerber B, Roll SC (2021) Worker perspectives on incorporating artificial intelligence into office workspaces: implications for the future of office work. Int J Environ Res Public Health 18(4):1690
Gal U, Jensen TB, Stein MK (2020) Breaking the vicious cycle of algorithmic management: A virtue ethics approach to people analytics. Inf Organ 30(2):100301
Galiere S (2020) When food-delivery platform workers consent to algorithmic management: a Foucauldian perspective. N Technol Work Employ 35(3):357–370
Gandini A (2019) Labour process theory and the gig economy. Hum Relat 72(6):1039–1056
Georgieff A, Milanez A (2021) What happened to jobs at high risk of automation?
Gillham J, Rimmington L, Dance H, Verweij G, Rao, A, Roberts KB, Paich M (2018) The macroeconomic impact of artificial intelligence. PricewaterhouseCoopers Report
Goos M, Manning A (2007) Lousy and lovely jobs: the rising polarization of work in Britain. Rev Econ Stat 89(1):118–133
Goos M, Manning A, Salomons A (2014) Explaining job polarization: Routine-biased technological change and offshoring. Am Econ Rev 104(8):2509–2526
Goos M, Konings J, Vandeweyer M (2015) Employment growth in Europe: The roles of innovation, local job multipliers and institutions. Local Job Multipliers and Institutions Discussion paper no.50 (Belgium, Vives)
Göranzon B, Josefson I (2012) Knowledge, Skill and Artificial Intelligence. Springer Science & Business Media
Gordon RJ (2015) Secular stagnation: A supply-side view. Am Econ Rev 105(5):54–59
Gordon RJ (2014) The demise of US economic growth: restatement, rebuttal, and reflections (No. w19895). Nat Bur Econ Res
Gorz A (2011) Critique of economic reason, Verso Books
Graetz G, Michaels G (2018) Robots at work. Rev Econ Stat 100(5):753–768
Gray ML, Suri S (2019) Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass. Eamon Dolan Books, Boston
Gries T, Naudé W (2018) Artificial intelligence, jobs, inequality and productivity: Does aggregate demand matter? IZA DP no. 12005
Gruetzemacher R, Paradice D, Lee KB (2020) Forecasting extreme labor displacement: A survey of AI practitioners. Technol Forecast Soc Chang 161:120323
Gruetzemacher R, Dorner FE, Bernaola-Alvarez N, Giattino C, Manheim D (2021) Forecasting AI progress: A research agenda. Technol Forecast Soc Chang 170:120909
Günther M, Kasirzadeh A (2022) Algorithmic and human decision making: for a double standard of transparency. AI Soc 37(1):375–381
Gurkaynak G (2019) Algorithms and artificial intelligence: an optimist approach to efficiencies. Competition Law & Policy Debate 5(3)
Haenlein M, Huang MH, Kaplan A (2022) Guest Editorial: Business Ethics in the Era of Artificial Intelligence. J Bus Ethics 1–3
Halal W, Kolber J, Davies O, Global T (2017) Forecasts of AI and future jobs in 2030: Muddling through likely, with two alternative scenarios. J Futur Stud 21(2):83–96
Hanley D, Hubbard S (2020) Eyes Everywhere: Amazon’s Surveillance Infrastructure and Revitalizing Worker Power. Open Markets Institute
Heinrichs B (2022) Discrimination in the age of artificial intelligence. AI Soc 37(1):143–154
Hermosillo A, Deng XN (2021) Flexibility in Disguise: Crowdwork Risks from the Worker Perspective. AMCIS 2021 TREOs. 34
Holm JR, Lorenz E (2021) The impact of artificial intelligence on skills at work in Denmark. New Technol Work Employment
Huang A, Chao Y, de la Mora Velasco E, Bilgihan A, Wei W (2021) When artificial intelligence meets the hospitality and tourism industry: an assessment framework to inform theory and management. J Hosp Tour Insights
Huang H (2021) Algorithmic management in food‐delivery platform economy in China. New Technol Work Employment
Hwang T (2018) Computational Power and the Social Impact of Artificial Intelligence. arXiv:1803.08971
International Labor Organization (ILO) (2018) The impact of technology on the quality and quantity of jobs. Global Commission on the Future of Work
Jarrahi MH, Newlands G, Lee MK, Wolf CT, Kinder E, Sutherland W (2021) Algorithmic management in a work context. Big Data Soc 8(2):20539517211020332
Joyce S, Stuart M (2021) Digitalised management, control and resistance in platform work: a labour process analysis. In Work and Labour Relations in Global Platform Capitalism. Edward Elgar Publishing
Kalleberg AL, Vallas SP (Eds.) (2017) Precarious work. Emerald Group Publishing
Kalleberg AL, Vallas SP (2018) Probing precarious work: Theory, research, and politics. Res Sociol Work 31(1):1–30
Kellogg KC, Valentine MA, Christin A (2020) Algorithms at work: The new contested terrain of control. Acad Manag Ann 14(1):366–410
Komlosy A (2018) Work: The Last 1,000 Years. Verso Books
Konkolewsky HH (2017) Digital economy and the future of social security. Administration 65(4):21–30
Korinek A, Stiglitz JE (2019) Artificial intelligence and its implications for income distribution and unemployment. In: Agrawal A, Gans J, Goldfarb A (eds) The Economics of Artificial Intelligence: An Agenda. University of Chicago Press
Lambrecht A, Tucker C (2019) Algorithmic bias? An empirical study of apparent gender-based discrimination in the display of STEM career ads. Manage Sci 65(7):2966–2981
Lane M, Saint-Martin A (2021) The impact of Artificial Intelligence on the labour market: What do we know so far?
Leicht-Deobald U, Busch T, Schank C, Weibel A, Schafheitle S, Wildhaber I, Kasper G (2019) The challenges of algorithm-based HR decision-making for personal integrity. J Bus Ethics 160(2):377–392
Lewis P, Bell K (2019) Understanding the UK’s productivity problems: new technological solutions or a case for the renewal of old institutions?. Empl Relations 41(2):296–312. https://doi.org/10.1108/ER10-2018-0273
Lobel O (2018) Coase and the Platform Economy. Forthcoming in Sharing Economy Handbook 2018, Cambridge University Press, Nestor Davidson, Michele Finck & John Infranca eds., San Diego Legal Studies Paper No. 17–318
Loi M, Ferrario A, Viganò E (2020) Transparency as design publicity: explaining and justifying inscrutable algorithms. Ethics Inf Technol 1–11
Lu Y, Zhou Y (2021) A review on the economics of artificial intelligence. J Econ Surveys
Lucivero F (2020) Big data, big waste? A reflection on the environmental sustainability of big data initiatives. Sci Eng Ethics 26(2):1009–1030
Luksha P et al (2015) Atlas of Emerging Jobs. Skolkovo, Moscow
Makridakis S (2017) The forthcoming Artificial Intelligence (AI) revolution: Its impact on society and firms. Futures 90:46–60
ManpowerGroup (2017) The Skills Revolution: Digitalization and Why Skills and Talent Matter, (Geneva)
Manyika J, Chui M, Miremadi M, Bughin J, George K, Willmott P, Dewhurst M (2017) A future that works: AI, automation, employment, and productivity. McKinsey Global Institute Research. Tech Rep 60:1–135
Mateescu A, Nguyen A (2019) Algorithmic management in the workplace. Data & Society: 1–15
Mellacher P, Scheuer T (2020) Wage inequality, labor market polarization and skill-biased technological change: an evolutionary (agent-based) approach. Computational Economics: 1–46
Moghaddam Y, Yurko H, Demirkan H, Tymann N, Rayes A (2020) The future of work: how artificial intelligence can augment human capabilities, business expert press
Möhlmann M, Henfridsson O (2019) What people hate about being managed by algorithms, according to a study of Uber drivers. Harvard Business Review, 30
Mokyr J (2018) The past and the future of innovation: lessons from economic history. Explor Econ Hist 69:13–26
Moore P, Woodstock J (2021) Augmented Exploitation: Artificial Intelligence, Automation and Work. Pluto Press
Morgan FE, Boudreaux B, Lohn AJ, Ashby M, Curriden C, Klima K, Grossman D (2020) Military applications of artificial intelligence: ethical concerns in an uncertain world. Rand Project Air Force Santa Monica United States
Mueller G (2021) Breaking Things at Work: The Luddites are Right about why You Hate Your Job, Verso Books
Muller Z (2019) Algorithmic harms to workers in the platform economy: The case of Uber. Columbia J Law Soc Prob 53:167
Munoz JM, Naqvi A (2018) Business strategy in the artificial intelligence economy. Business Expert Press
Muntaner C (2018) Digital platforms, gig economy, precarious employment, and the invisible hand of social class. Int J Health Serv 48(4):597–600
Nedelkoska L, Quintini G (2018) Automation, skills use and training
Neufeind M, O'Reilly J, Ranft F (2018) Work in the Digital Age: Challenges of the Fourth Industrial Revolution, Rowman & Littlefield International
Newlands G (2021a) Algorithmic surveillance in the gig economy: The organization of work through Lefebvrian conceived space. Organ Stud 42(5):719–737
Newlands G (2021b) Lifting the curtain: Strategic visibility of human labour in AI-as-a-Service. Big Data Soc 8(1):20539517211016024
Nieswandt K (2021) Automation, basic income, and merit. In: The Politics and Ethics of Contemporary Work. Routledge, pp 102–119
Nishant R, Kennedy M, Corbett J (2020) Artificial intelligence for sustainability: Challenges, opportunities, and a research agenda. Int J Inf Manage 53:102104
Obermeyer Z, Powers B, Vogeli C, Mullainathan S (2019) Dissecting racial bias in an algorithm used to manage the health of populations. Science 366(6464):447–453
Park H, Ahn D, Hosanagar K, Lee J (2021). Human-AI Interaction in Human Resource Management: Understanding Why Employees Resist Algorithmic Evaluation at Workplaces and How to Mitigate Burdens. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp 1–15
Paschkewitz J, Patt D (2020) Can AI make your job more interesting? Issues Sci Technol 37(1):74–78
Pasquale F (2015) The Black Box Society: The Secret Algorithms That Control Money and Information. Harvard University Press
Payne J (2000) The unbearable lightness of skill: the changing meaning of skill in UK policy discourses andsome implications for education and training. J Educ Policy 15(3):353–369
Perrig L (2021) Manufacturing consent in the gig economy. In: Moore P, Woodstock J (eds) Augmented Exploitation. Pluto Press
Petropoulos G (2018) The impact of artificial intelligence on employment. In: Neufeind M, O'Reilly J, Ranft F (eds) Work in the Digital Age. Rowman & Littlefield International, p 119
Pfeiffer S (2016) Robots, Industry 4.0 and humans, or why assembly work is more than routine work. Societies 6(2):16
Polak P, Nelischer C, Guo H, Robertson DC (2020) “Intelligent” finance and treasury management: what we can expect. AI Soc 35(3):715–726
Pouliakas K (2018) Determinants of automation risk in the EU labour market: A skills-needs approach
Prassl J (2018) Humans as a Service: The Promise and Perils of Work in the Gig Economy. OUP, Oxford
Purcell C, Brook P (2020) At least I’m my own boss! Explaining consent, coercion and resistance in platform work. Work Employ Soc
Quinn C (2021) Facebook vs. Australia: What happens when Big Tech comes for the news? Foreign Policy
Rahman HA (2021) The invisible cage: Workers’ reactivity to opaque algorithmic evaluations. Adm Sci Q 66(4):945–988
Rikap C (2021) Capitalism, Power and Innovation: Intellectual Monopoly Capitalism Uncovered (1st ed.). Routledge
Roberts H, Cowls J, Morley J, Taddeo M, Wang V, Floridi L (2021) The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation. AI Soc 36(1):59–77
Salento A (2018) Digitalisation and the regulation of work: theoretical issues and normative challenges. AI Soc 33(3):369–378
Saniee I, Kamat S, Prakash S, Weldon M (2017) Will productivity growth return in the new digital era. Bell Labs Techn J 22:1–18
Savage N (2020) The race to the top among the world’s leaders in artificial intelligence. Nature 588(7837):S102–S102
Scarpetta S (2018) The future of work: Advancing labor market resilience. J Int Aff 72(1):51–57
Schlund R, Zitek E (2021) Who’s my manager? Surveillance by AI leads to perceived privacy invasion and resistance practices. In: Academy of Management Proceedings, vol 2021, no 1, p 11451. Academy of Management
Schmidt FA (2017) Digital labour markets in the platform economy: Mapping the political challenges of crowd work and gig work. Friedrich-Ebert-Stiftung
Schor JB, Attwood-Charles W, Cansoy M, Ladegaard I, Wengronowitz R (2020) Dependence and precarity in the platform economy. Theory Soc 49(5):833–861
Schwab K (2017) The fourth industrial revolution, Currency
ServiceNow (2017) Today’s State of Work: At the Breaking Point, (California)
Shapiro A (2018) Between autonomy and control: Strategies of arbitrage in the “on-demand” economy. New Media Soc 20(8):2954–2971
Smith JE (2020) Smart Machines and Service Work: Automation in an Age of Stagnation, Reaktion Books
Solow R (1987) "We'd better watch out", New York Times Book Review, July 12, 1987
Spencer DA (2018) Fear and hope in an age of mass automation: debating the future of work. N Technol Work Employ 33(1):1–12
Spencer D, Slater G (2020) No automation please, we’re British: technology and the prospects for work. Camb J Reg Econ Soc 13(1):117–134
Spenner KI (1990) Skill: meanings, methods, and measures. Work Occup 17(4):399–421
Squicciarini M, Staccioli J (2022). Labour-saving technologies and employment levels: Are robots really making workers redundant? OECD Science, Technology and Industry Policy Papers 124, OECD Publishing
Srnicek N (2017) Platform Capitalism. Polity Press
Srnicek N, Williams A (2016) Inventing the Future: Postcapitalism and a World Without Work (revised and updated edition). Verso Books
Stahl BC, Antoniou J, Ryan M, Macnish K, Jiya T (2022) Organisational responses to the ethical issues of artificial intelligence. AI Soc 37(1):23–37
Stamper R (1988) Pathologies of AI: Responsible use of artificial intelligence in professional work. AI Soc 2(1):3–16
Steinhoff J (2021) Automation and Autonomy: Labour, Capital and Machines in the Artificial Intelligence Industry. Springer International Publishing, Germany
Strubell E, Ganesh A, McCallum A (2020) Energy and policy considerations for modern deep learning research. In: Proceedings of the AAAI conference on artificial intelligence, vol 34, no 9, pp 13693–13696
Strubell E, Ganesh A, McCallum A (2019) Energy and policy considerations for deep learning in NLP. arXiv preprint arXiv:1906.02243
Sundararajan A (2017) The sharing economy: The end of employment and the rise of crowd-based capitalism. MIT press
Susskind RE, Susskind D (2015) The Future of the Professions: How Technology Will Transform the Work of Human Experts. Oxford University Press
Susskind D (2020) A world without work: Technology, automation and how we should respond, Penguin UK
Thelen K (2018) Regulating uber: the politics of the platform economy in Europe and the United States. Perspect Politics 16(4):938–953
Thomas SL, Nafus D, Sherman J (2018) Algorithms as fetish: Faith and possibility in algorithmic work. Big Data Soc 5(1):2053951717751552
Tinbergen J (1974) Substitution of graduate by other labour. Kyklos: International Review for Social Sciences
Tsamados A, Aggarwal N, Cowls J, Morley J, Roberts H, Taddeo M, Floridi L (2022) The ethics of algorithms: key problems and solutions. AI Soc 37(1):215–230
Tubaro P, Casilli AA (2019) Micro-work, artificial intelligence and the automotive industry. J Indust Bus Econ 46(3):333–345
Tubaro P, Casilli AA, Coville M (2020) The trainer, the verifier, the imitator: Three ways in which human platform workers support artificial intelligence. Big Data Soc 7(1):2053951720919776
Tubaro P (2021) Disembedded or deeply embedded? A multi-level network analysis of online labour platforms. Sociology. https://doi.org/10.1177/0038038520986082
Vallas S, Schor JB (2020) What do platforms do? Understanding the gig economy. Ann Rev Sociol 46:273–294
Van de Gevel AJ, Noussair CN (2013) The nexus between artificial intelligence and economics. SpringerBriefs in Economics. Springer, Berlin, Heidelberg
Van Rijmenam M (2019) The Organisation of Tomorrow: How AI, blockchain and analytics turn your business into a data organisation. Taylor & Francis
van Wynsberghe A (2021) Sustainable AI: AI for sustainability and the sustainability of AI. AI Ethic 1(3):213–218
Veen A, Barratt T, Goods C (2020) Platform-capital’s ‘app-etite’for control: A labour process analysis of food-delivery work in Australia. Work Employ Soc 34(3):388–406
Von Krogh G (2018) Artificial intelligence in organizations: New opportunities for phenomenon-based theorizing. Academy of Management Discoveries
Wajcman J (2017) Automation: is it really different this time? Br J Sociol 68(1):119–127
Wall LD (2018) Some financial regulatory implications of artificial intelligence. J Econ Bus 100:55–63
Wang P (2019) On defining artificial intelligence. J Artif General Intellig 10(2):1–37
Weeks K (2011) The Problem with Work. Duke University Press
Wilson HJ, Daugherty P, Bianzino N (2017) The jobs that artificial intelligence will create. MIT Sloan Manag Rev 58(4):14
World Economic Forum (2018) The Future of Jobs.
Xie M, Ding L, Xia Y, Guo J, Pan J, Wang H (2021) Does artificial intelligence affect the pattern of skill demand? Evidence from Chinese manufacturing firms. Econ Model 96:295–309
Zajko M (2021) Conservative AI and social inequality: conceptualizing alternatives to bias through social theory. AI Soc 36(3):1047–1056
Zhang D, Pee LG, Cui L (2021) Artificial intelligence in E-commerce fulfillment: A case study of resource orchestration at Alibaba’s Smart Warehouse. Int J Inf Manage 57:102304
Zhou G, Chu G, Li L, Meng L (2020) The effect of artificial intelligence on China’s labor market. China Econ J 13(1):24–41
Zuboff S (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. Profile Books
Acknowledgements
Research for this review was funded by the Australian Research Council, DP190103116.
Funding
Open Access funding enabled and organized by CAUL and its Member Institutions.
Ethics declarations
Conflict of interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.