While AI has long been the subject of intense discussion and research, recent breakthroughs in generative AI mark the beginning of a new era and are raising existential questions for humanity. Decades of dedicated R&D into AI have culminated in the launch of powerful generative systems such as ChatGPT, Claude, DALL-E, Midjourney and Stable Diffusion. These have been made widely and freely available to anyone with an internet connection, outside of any specific regulatory framework. Unsurprisingly, their adoption has been almost instantaneous.

Based on large language models (LLMs) and trained on vast amounts of data, generative AI can produce not only text but also images, music and computer code. Across the world, experts are trying to forecast the impact of this powerful technology. Goldman Sachs Research claims it will contribute substantially to global economic growth, depending on its capability and adoption timeline. In a study published in April 2023, it estimates this contribution at around 7 percent of annual global GDP. The study also anticipates significant job disruption, with roughly two-thirds of US occupations expected to be exposed to some degree of AI automation; for those occupations, the share of the workload potentially replaceable by AI ranges from 25 to 50 percent (Briggs and Kodnani 2023).

Historically, all new technologies have impacted workers, collectively and individually. Past industrial revolutions, up to and including the recent digital revolution, have triggered a rather mechanical transfer of workload from humans to machines, not dissimilar from simple automation. Generative AI is a game-changer. It has the ambition to perform cognitive functions, do creative work, learn from experience and teach itself how to solve problems. It represents the first step of a shift from narrow AI to general AI.

If and when that shift materialises, it will profoundly affect the nature of work as we know it. Experts predict that companies will be able to decompose existing jobs into bundles of tasks and assess the extent to which each bundle can be entrusted to narrow AI, generative AI and, potentially, general AI (Eloundou et al. 2023). After all, task bundling and allocation algorithms already exist to ensure an effective redistribution of the workload, at least from the employer's point of view.

All jobs that involve creative tasks, to any extent and in any sector of the economy, are exposed, with skilled segments of the labour force particularly at risk. Many jobs risk being stripped of what makes them worthwhile and justifies paying someone to perform them: why recruit workers if their work can be broken down into bite-size tasks and outsourced to generative AI systems that mimic human creativity and inventiveness? An increasingly polarised and disrupted labour market, in which humans have to compete against AI and against each other, becomes a real possibility.

This is a purposefully extreme and provocative depiction of what may happen. To ensure such a reality never materialises, action is needed on both sides of the equation: generative AI must be regulated, and the labour market must be made more resilient.

First, if and when technology companies act irresponsibly by launching products onto the market with limited safeguards, as OpenAI did with ChatGPT, regulation is needed to protect society, and workers in particular, against unwanted consequences. While the EU's AI Act will provide some protection in the EU, a more global regulatory effort is needed to avoid regional imbalances, and specific measures should be implemented to protect workers. The precautionary principle can and should be a fundamental pillar of such an effort.

Self-regulation through ethical guidelines, voluntary standards or codes of conduct, which its supporters claim is more responsive and better adapted to rapidly changing technologies, gives private interests near-total freedom to decide what rules to apply. This is counterproductive, and the sudden advent of ChatGPT and related tools has demonstrated, de facto, the ineffectiveness of self-regulation.

Second, the labour market must be made more resilient and better equipped to mitigate the impact of disruptive AI systems. Every day, across the world, individuals, companies and public authorities are embracing untested generative tools that produce unreliable content at best, and more often than not disinformation or discriminatory results. Every day, human workers risk being replaced by imperfect and vulnerable generative technologies. Such "shocks" undermine a labour market that, despite past predictions to the contrary, remains profoundly unequal. Millions of workers, particularly in manual and physical jobs in sectors such as cleaning, construction, extraction or maintenance, still suffer from various social and labour inequalities, often associated with job precariousness. No labour market can withstand the transformative power of generative AI and remain resilient if pockets of inequality and precariousness persist.

With generative AI, millions of other workers risk falling into that same precariousness. If the labour market evolves into a task-based environment, upskilling and reskilling will be essential for humans to stay ahead of the AI pack. Are our education systems ready to address this challenge? Are companies, as key providers of skills training, willing to train their workers rather than buy AI tools to replace them?

Third, the time has come for a serious debate about the role of technology and automation in society, and about their promised ability to make our lives at work better, easier and healthier. Keeping this promise implies giving workers the ability to prosper whilst working less. This should prompt discussions about work modalities (when do we start working, how long do we work per day and week, when do we retire?) and a global reflection on the notion of work itself and its value. In turn, this should trigger efforts to strengthen trade unions capable of defending workers' rights, especially in precarious occupations and in sectors that have not traditionally benefited from union support, including tech workers, freelancers, designers, many knowledge workers and "gig economy" workers.

Finally, genuine dialogue between employers and workers about the introduction of generative AI in companies remains all but non-existent. Too many questions remain undiscussed: should generative AI be banned in certain companies and sectors? To what extent should workers follow AI recommendations? How autonomous should workers remain when working alongside generative AI? What should be done in case of mistakes or biased outputs?

What should we make of the recent call to pause AI development for six months? The open letter 'Pause Giant AI Experiments', organised by the Future of Life Institute and calling for an immediate six-month pause in the training of AI systems more powerful than GPT-4, has collected more than 30,000 signatures. It states that advanced AI "could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources".

Whatever one may think of this initiative, it has had the merit of attracting the attention of the general public, publicly highlighting essential concerns about advanced forms of AI, and showing that the governance of AI deserves our full attention. So does strengthening the labour market and ensuring that human creativity, not machine-generated content, remains one of its core features.