
11.1 Artificial Intelligence at Work: Five Workers' Stories

Over the past decade, the impact of AI on the future of work has been the subject of much research by academics, governments, experts, non-governmental organisations, professional federations, international organisations, philosophers, essayists and others. It is not the intention here to list them all. However, from a worker’s point of view, we can organise the anticipated effects of AI into five categories [13].

The replaced worker: AI systems will massively replace workers and destroy jobs. Several studies and essays suggest that many jobs will disappear (more than 40%), with machines performing tasks more efficiently and at lower cost than humans [6, 7, 12, 14]. Adopting a “job-based approach”, they estimate that many occupations are at high risk of automation. Other studies prefer a “task-based approach” [5], focusing on the complementarities between automation and labour. From this perspective, AI will destroy few jobs (around 10%, depending on the country) but will transform many occupations (around 50%).

The dominated worker: AI systems will dominate workers by reducing their empowerment. Beyond the “technological singularity” hypothesis, many studies are concerned about the effects of AI on workers' autonomy, due to the development of an “algocracy”. But the dominated worker does not only result from active forms of domination. It can also result from the worker's passivity in the way they interact with the system: overconfidence, the contentment effect (being satisfied with a relatively satisfactory solution obtained without effort) and overcautiousness can disengage the worker, reduce their expertise and consequently increase their dependence on the system.

The augmented worker: Workers’ empowerment is strengthened by AI. Combined with AI, the augmented human reaches a level of performance normally unattainable, thanks to a good partnership between human and machine in which the human contributes their distinctive added value [18, 19].

The divided worker: a “winner-takes-all economy” and the polarisation of labour. Many studies suggest that AI may polarise the labour market. On the one hand, an “aristocracy of intelligence”, highly complementary to artificial intelligence, occupies highly qualified and stimulating jobs. On the other hand, workers in low-skilled jobs have precarious and uninteresting work [1, 2, 4, 8, 16, 17] (Graham and Woodcock 2019).

The rehumanised worker: workers focus on distinctively human skills. The automation of tasks and trades could be an opportunity for the “de-automation” of human work, allowing the development of human capacities: creativity, manual dexterity, abstract thinking, problem solving, adaptability and emotional intelligence [10, 19].

In recent years, however, several companies have started to integrate AI into their organisations and professions, making it possible to move beyond these speculative approaches.

11.2 From Stories to Real Cases: What Working with AI Could Mean

Building a Collection of Real Use Cases of AI Systems in the Workplace

The Global Partnership on Artificial Intelligence (GPAI) has decided to create a global catalogue of real-world AI use cases at work. By exploring the state of the art and the capabilities of AI in the workplace, the Future of Work Working Group seeks to provide critical technical analysis that will contribute to the collective understanding of

  • how AI can be used in the workplace to empower workers and increase productivity, and how workers and employers can prepare for the future of work;

  • how job quality, inclusiveness and health and safety can be preserved.

Since September 2020, we have been collecting stories from different actors in AI (providers, CEOs, managers and end users) who are involved in its implementation in work and organisations, in order to:

  • better understand the motivations of those who integrate AI into organisations and work;

  • better understand how AI is deployed in the field;

  • highlight the issues and social effects of AI integration;

  • highlight the convergences and divergences in the feedback according to the nature of the respondents;

  • highlight “good practices” from the field that could outline a method for implementing AI.

The answers to the questions refer to a specific professional application of AI (and not to an AI system in general). Indeed, many questions relate to uses, organisational and social contexts, or design methods. After the first year of research, the catalogue consists of 150 use cases spread across 12 countries.

11.2.1 Five Workers' Stories Put to the Test of the Real World

The replaced worker: In almost all the cases studied, AI systems are not intended to automate an entire process or task. Rather, the goal is to improve the performance of the human worker. In this sense, respondents emphasise the notion of a “decision-making tool” and stress that the final decision is always human. “There is the human-in-the-loop and human makes the final decision. The AI alerts and recommends only” (Private—SME—FinTech—Machine Learning, NLP—Automation of the surveillance). The reasons are not ethical but technical, related to the “probabilistic” or “empirical” nature of AI. AI systems built on this type of algorithm provide only probabilities, based on limited knowledge of the environment and contexts. Humans can provide this information and correct possible errors in order to make the decision. Therefore, the value generated by AI systems in organisations would not come from increasing the organisation’s control over its human resources by automating work, nor from increasing the power of the organisation through machines or processes. The value created by AI systems would come from trust in human work. “There are two approaches to AI: one which values the worker, one where he is excluded, because he is fragile and limited. Either we build trust, or we build control. This gives a moral compass on a path that can be paved with rupture. Putting the human at the centre is an incantatory discourse…it must not be said, it must be done. University engineering departments must open up more to the humanities, question their political responsibility” (Non-profit—SME—Start-up on Augmented intelligence—Computer vision with standard and specific methodologies—Increasing the speed of fault analysis). According to the respondents, current AI is nothing more and nothing less than a human decision-support tool. In this sense, they believe that AI’s capacities are highly overestimated, which simultaneously generates irrational fears and hopes and, sometimes, frustration. “It is more about having a machine plus human system, it is better both for efficiency and acceptability. In the end you will always keep a human in the process, mainly because of the high amount of spending decision in case there is a problem. You have someone to complain to if anything goes wrong. The difference is that AI makes different mistakes than humans, and sometimes they also seem stupid ones. There are some people that imagine a magic wand and they have impossible expectations but in the end the experts’ knowledge is needed, and everything is aligned” (Private—SME—Energetic—Defect detection, failure detection—AI for image processing and defect detection on industrial structures. Qualification of defects on wind turbines). Almost ten years after Frey and Osborne’s first prediction, it is hard to say that AI has caused a massive wave of task automation.

The dominated worker: Few use cases are mature enough for situations of algocracy to be observed. However, three tendencies emerge:

  • Algorithmic management situations in warehouses. Voice-directed order-picking solutions, known as “voice picking”, totally control the picker, who receives instructions via a headset and dialogues directly with the information system through a microphone and voice recognition software. The voice software solution interfaces with the warehouse management system or directly with the sales management system. The promoters of this solution praise the productivity gains, the reduction of the error rate, the reliability of the organisation and the elimination of hand-held order documents: the user works hands- and eyes-free. “Negative results predominate because qualification and work experience are no longer necessary because the AI takes over,” explains a German trade unionist. “Only a short period of training is required to use the system. With regard to the quality of the work, there was a simplification (the system speaks all languages, knows all calculations, processes and products); the AI thus led to a de-qualification of the workers because no previous knowledge was required. There is no further development of the workers because it is not necessary and also impossible, e.g., the workers unlearn calculating and product knowledge. The result can be described as the dumbing down of the workers. They act like machines” (Private—SME—Food Industry—Perception/audio processing—Increase pickers' productivity in a warehouse with a pick-by-voice system).

  • A growing influence of processes on practices: The more prescribed the work, the better the applications perform. "The information that our tool delivers is that which the methods have previously structured. The more our clients have filled in their processes, the more complete our system is. When they don't have these series of instructions, the first step is to help them formalize them” (Private—SME—Software development—Natural Language Processing—Optimising machine usage through speech or text interaction).

  • Governance by numbers [24]: When AI systems communicate through numbers, indicators and probabilities, workers find it difficult to position themselves. Some people trust them absolutely. The presumed efficiency of artificial intelligence is, in this sense, the most recent formalisation of the old dream of “harmony through calculation”, in which mathematics is the key to the intelligibility of the world. In other situations, workers would like to intervene, but they do not know how: “what was different compared to other tools is that we know more or less how it works, whereas here, there was a real lack of clarity about the results, how the tool obtained a result” (Public—Big firm—Public Administration—Machine learning—Identifying errors).

The augmented worker: In the absence of total automation of a task, current AI systems are better described as decision-support systems. Total automation by AI is blocked by the impossibility of guaranteeing the performance level of a system: AI is not certifiable, and certification is an imperative condition for the automation of a critical industrial process. We can distinguish four forms of augmentation of the worker by AI:

  • Augmentation-remediation: AI allows the worker to do what they do not know how to do. “This AI application addresses a cybersecurity problem: too many documents produced by different departments. Humans are no longer able to tag them. This is a data governance problem. Because of the evolution of professional practices, one can be identified anywhere in the world with confidential data. We need to secure data outside its traditional security perimeter” (Private—Big firm—AI development for cybersecurity—Web service—Securing data outside its traditional security perimeter). AI systems can also be used to compensate for the limitations of some workers: “It stops doing this activity manually in Excel and to start recording the data with a voice recognition system. Saving time, reduction of errors, greater control are three examples of benefits. We can tell if we are within the tolerance levels, within the standards. If not, the operator has to say why. The academic level of the workers is low with poor literacy. The use of voice is empowering” (Private—SME—Software—NLP—Facilitate quality).

  • Augmentation-rationalisation: AI allows workers of different skill levels to reach a more homogeneous result. The AI system “aims to accumulate knowledge so a young worker or newly assigned technician can handle work that requires the knowledge of a highly skilled technician (highly skilled, can operate specialized equipment, etc.). This AI application supports the quality of work equivalent to that of highly skilled workers by incorporating the tacit knowledge of highly skilled workers into AI” (Private—Big firm—General construction—Deep learning—Raising the skills of young workers).

  • Augmentation-delegation: AI relieves the worker of low value-added tasks and refocuses them on high value-added tasks. “The AI system is very good at detecting welding anomalies, but much less good at qualifying these anomalies. The value of the worker has shifted from detecting problems to qualifying problems” (Private—Big Firm—Computer Vision—Object Recognition).

  • Augmentation-cooperation: The association of the worker and the AI produces a new level of performance. From this human/AI association would emerge a worker equipped with new capacities, a “synthesis of the best of man and machine”: “The main idea is to look for the bottleneck in calculation programs, where computation times take longer, and replace that part of the code by a digital twin. There is a compromise to be made between precision and time saving. Some people want more speed than accuracy, and others the opposite. Sometimes it is better to know results in 1 h for instance instead of 3 weeks” (Private—SME—Data sciences—Deep Learning—Digital software twins to increase the speed of calculations).

The divided worker: AI systems can indeed feed a polarisation of work by generalising the expertise of the most competent and experienced workers. But the impact of AI systems on human expertise is heterogeneous.

  • A shift in value that can reinforce the status of the business expert: AI systems, by shifting human work to high value-added tasks, strengthen the position of experts. “It changes the organisation with automation of the handling phase and refocusing on the reading and statistical analysis of the results. This was possible because the operators were experienced” (Non-profit—Big firm—Agrifood—Cobot, Visual recognition, Adaptive learning for movements—Cobot that increases worker productivity by refocusing them on high value-added tasks).

  • An association of “novice + AI system” that can weaken the status of the business expert. With AI systems, business experts become less indispensable in the long term, once the AI system has been designed and trained and becomes more autonomous. This strengthens managers’ positions. “We know that the people who do this have a very high added value, but that's not reassuring, they don't want the adjustment to rest on them. It’s a critical operation that we do regularly. […] The intelligence was in the machine, the managers wanted to be able to put anyone on the task. The person was the hand of the application” (Private—Big Firm—Aircraft industry—Door positioning assistance system).

The rehumanised worker: The most striking “rehumanisation” effect is the automation of repetitive tasks considered to have little added value, such as answering emails. “In this case, we had a huge social issue. The simple processing of e-mails represented 6–8 h of activity per day. Many wanted to change jobs. Our system now has 94% successful email routing. Now they can focus on their job, accounting analysis. They still answer emails for two hours a day. But these are dedicated, complex, interesting requests that require their expertise” (Private—Big Firm—Energy—Chatbot for accountants). Cobots can also relieve workers of tasks that generate musculoskeletal disorders: “Automation of repetitive tasks with high mental load, reduction of musculoskeletal disorders: the operators put the products to be tested in boxes, the cobot recovers, scans, checks in its database that the product “on hand” is the one to be tested. It opens the product and duplicates the protocol of an operator until the end of the preparation phase. The analysis phase remains human”.

Beyond the five stories, what emerges most strongly from our survey is how, in their current phase of development, the professional applications of AI are profoundly shaped by work and workers. In the majority of cases studied, AI systems consist in generalising expertise. Thus, as was previously the case for expert systems, the actors of the profession are essential to designing and improving AI systems. “The companion has a very important role because we rely on him to educate AI; without his feedback, we are blind. I try to remain humble because I fundamentally believe in the intrinsic value of professions. I'm talking more about “increased intelligence” than AI, and that's what understanding is all about. You can’t work without domain experts. That's why all the big AI companies are recruiting trade experts. Unsupervised learning bricks are specific and redundant, always start with an expert system approach” (Non-profit—SME—Start-up on Augmented intelligence—Computer vision with standard and specific methodologies—Increasing the speed of fault analysis).

11.3 Discussion: AI, Organisation, Workers and Safety

Machine learning is the main current approach to what is called “empirical AI”. This means that it does not produce deterministic or certifiable results like classical machines, but works on the basis of statistics from which it derives correlations, and these correlations establish probabilities. In high-hazard organisations, these probabilities will have to coexist with a very normative culture. It will be necessary, for example, to estimate the value of a prediction in a structured environment. It is well known that workers do not always comply with prescriptions, and that procedural deviations are essential to the proper functioning of organisations. However, AI can, on the one hand, reinforce prescribed work (algocracy), while, on the other hand, its functioning remains empirical. Moreover, the balances that workers will be able to find in their interactions with machines will be unstable, as machines will keep improving. In the end, one of the challenges for safety will be to manage situations of paradoxical injunctions: the worker will be asked to trust an AI system while controlling it and assuming responsibility for the whole.
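
To make this contrast concrete, the following minimal sketch (purely illustrative and not drawn from the surveyed cases; the class names, probabilities and the 0.95 threshold are assumptions) shows how an empirical AI system outputs a probability rather than a certified verdict, and how a confidence threshold can keep the final decision with the worker, as in the human-in-the-loop arrangements described above.

```python
from dataclasses import dataclass


@dataclass
class Prediction:
    label: str          # hypothetical class, e.g. "defect" or "no_defect"
    probability: float  # model confidence between 0 and 1, not a guarantee


def route_decision(pred: Prediction, review_threshold: float = 0.95) -> str:
    """Decide who acts on a probabilistic prediction.

    The system never decides alone: above the (arbitrary, illustrative)
    threshold it only recommends, and the worker confirms or overrides;
    below it, the case is deferred entirely to the human expert.
    """
    if pred.probability >= review_threshold:
        return f"recommend '{pred.label}' to the worker, who confirms or overrides"
    return "defer to the human expert (model confidence too low)"


if __name__ == "__main__":
    for pred in (Prediction("defect", 0.98), Prediction("defect", 0.62)):
        print(pred, "->", route_decision(pred))
```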

Considering workers, three shifts appear particularly significant. These three shifts converge to reconfigure the forms of engagement in work:

  • Shift 1: The object of work could be put at a distance. The worker will do less themselves and will increasingly supervise programs and machines. What will be the impact of this distancing on workers' consideration of safety?

  • Shift 2: These programs and machines will then become fully fledged objects of the activity. AI applications need to be trained, completed and corrected. Workers could become less expert in the situations they have to solve than in the machines that solve the situations. How can we organise this work on AI so that it optimises safety?

  • Shift 3: These two shifts will affect the construction of the self at work, professional identity and the recognition of singularity. What will be the place of safety in this reconfiguration of identity?

From the point of view of safety, we need to understand how these new forms of subjective commitment will contribute, positively or negatively. The example of the automation of aircraft piloting provides elements of an answer: it has generally made flights safer, but it has also increased human error, particularly by depriving pilots of the sensations and perception of flight [9]. In response, aircraft manufacturers are working on two completely different avenues: increasing automation and improving the relationship between humans and machines [22]. Safety could be faced with the same kind of questions in the near future.