Sustainability has become a new mantra, not only for intellectual property law, but as a horizontal theme that penetrates all areas of life, policy and – also – law. A commonly used definition of the term is that developed by the Brundtland Commission, defining sustainability as “meeting the needs of the present without compromising the ability of future generations to meet their own needs.”Footnote 1 This definition suggests individual and collective restraint as an intergenerational obligation and a responsible use of limited resources. But what is the role of law in achieving sustainability, especially intellectual property law, which governs “resources” that are not scarce?

The role of the law for sustainable societies is, first and foremost, a supporting one. The law should enable, not disable, sustainable practices, and it should put limits on the extent to which scarce resources can be appropriated. The World Intellectual Property Organization, more specifically, describes intellectual property as a “critical incentive” for innovation on the way to realizing the Sustainable Development Goals.Footnote 2 There is, it can be argued, another dimension to sustainability and intellectual property law. Not only should intellectual property law serve as an incentive to generate ideas that help humanity to live and act more sustainably, but the law itself must be sustainable. In other, simpler words, the law must be perceived as acceptable and legitimate, and so must be the process of its creation. The normative shape of the law must reflect societal consensus on how to address current and future challenges. The emerging challenges posed by artificial intelligence (AI) serve as an urgent reminder that rules are necessary, but that they must also have legitimacy.

In an open letter on 22 March 2023, an eclectic mix of technology luminaries, public intellectuals, academics and other citizens called for a voluntary pause – even a government-mandated moratorium – on “the training of AI systems more powerful than GPT-4” for six months.Footnote 3 The letter quickly accumulated more than 20,000 signatures with its call for governance systems to control the use of AI and to enable “more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal” AI systems. From this letter radiated the fear that AI’s harms will outweigh its benefits; that its immense and ever-growing capabilities would be difficult to control. The letter also expressed the concern, albeit implicitly, that national and regional legislators had not yet taken up the challenge to rein in AI technology and let it find its place in our normative systems. Published alongside the letter were policy recommendations, which suggested external auditing of specific AI systems and institutional oversight bodies, amongst other measures.

The safety and reliability of AI systems is only one side of the coin. The availability of good training data is essential to make such systems acceptable, or, to put it differently, the sustainability of AI technology and its various applications depends on access to a wide range of data and information. A few days after the publication of the letter calling for a moratorium on the training of powerful AI systems, representatives from the oldest collective representation of published authors lobbied the US Congress to intervene in the field of AI.Footnote 4 The perceived danger stemming from AI was different from that expressed by the authors and signatories of the open letter. Rather than fearing uncontrollable and harmful AI systems, the Authors Guild warned against the displacement of human-generated creativity by indistinguishable machine-generated creations. This existential threat to human creativity, it was argued, must be addressed through law – copyright law in particular.

Whilst, at least in Europe, a policy process to regulate the potential direct harmful effects of AI is already underway, a dedicated policy process to deal with other collateral damaging effects of AI technology has not been initiated. Moreover, there are other areas in which AI technology is starting to create uncertainty, discomfort, even early-onset resignation.

The acceptance of a regulatory and governance framework for AI will, to a significant extent, depend on its acceptance by those who are directly or indirectly affected by AI systems. These are, on the one hand, the creators of AI applications who make the investments in technology that may or may not benefit society.Footnote 5 On the other hand are those affected by the use of AI, positively as well as negatively. And finally, somewhere in the middle are those that help to train AI with data that can take any shape and form, including that of works protected by copyright. Authors and other creators of all sorts, by making their works accessible online, voluntarily or involuntarily surrender their works to train the machines that might one day replace them – or so many fear. It is here that intellectual property law will most likely play the biggest role in contributing to a sustainable legal framework.

Alleviating those fears will be crucial in order to create acceptance for a new regulatory environment that – one can reasonably assume – will shape significant parts of our lives in the future. For that purpose, the legal and regulatory framework(s) will have to be able to withstand criticism; in the same sense that AI systems should be explainable,Footnote 6 so should the legislation that governs them. The legislative choices that will have to be made must be embedded in a normative framework that communicates with other related legal frameworks – including intellectual property law. While the substantive rules will largely differ, because of their contextual specificity, communication between legal frameworks can best be achieved via more general principles that guide the shaping and application of substantive rules.

On 24 April 2023, the European Copyright Society in an open letter to the EU Commissioner for Internal Market, Thierry Breton, expressed the hope “that further policy initiatives reflect clear general principles on future European copyright law.”Footnote 7 Developing these principles – to begin with – in intellectual property law would be a first step in the direction of a sustainable legal framework for AI.

One approach to identifying the principles that could shape the regulation of a transformative technology such as AI is to examine existing rules that directly or indirectly shape its use. For example, Arts. 3 and 4 of the Directive on copyright and related rights in the Digital Single MarketFootnote 8 determine the conditions under which lawful text and data mining can be performed. A clear normative distinction can be observed between the different purposes of the generally condoned mining of copyright-protected works and other subject matter. While mining activity is permitted under an exception, only mining for the purposes of scientific research escapes the possibility of rightholders reserving such uses – a reservation presumably intended to create a market for AI training data. This distinction between different purposes reflects a sense of economic fairness: users who create value with data provided by others can be expected to remunerate the creators of such (training) data.

Similarly, the draft AI Act, in its explanatory memorandum, underlines, based on good practices, that “AI development and use should be guided by certain essential value-oriented principles.”Footnote 9 Shaping principles takes time and requires reflection and discourse. And whilst the true and perceivable impact of AI on every part of society is becoming more and more visible, a certain disillusionment relating to the use of AI and its benefits might soon emerge. For example, while the generation of images with DALL-E might have been an enchanting experience for many, the use of ChatGPT for “academic” and other “educational” purposes is a more sobering experience. This makes it all the more important to discuss and shape the principles, or adopt existing ones, that will govern AI technology in the future. This is not an exercise that can be performed, or a task completed, within six months, but a process that will gradually have to engage policymakers, academics, industry and society in general.

The sectoral institutions that are slowly emerging as a result of EU legislation, such as Digital Services Coordinators under the DSA and “notifying authorities” under the draft AI Act, will have an important role to play in lending legitimacy to AI governance and oversight.Footnote 10 Their role is crucial not only in administering and supporting the enforcement of intellectual property and other rights where AI technology will play an important role. These institutions must also shape policy discussions, since they are uniquely positioned to observe the impact of new technologies and the difficulties faced by them and their users. Of course, these institutions must work transparently and be accountable, i.e. they must themselves be legitimate and subject to institutional oversight. It is indispensable that the interlinking mechanisms and processes that lend legitimacy and make this legal and institutional framework sustainable are carefully elaborated.

Public acceptance and perceived legitimacy of intellectual property systems and related policy processes are essential. For intellectual property, and copyright in particular, this means that certain discussions must be reopened. The European Copyright Society queries why a remuneration right has been established for press publishers but not for non-research-related text and data mining.Footnote 11 This is a legitimate question in the absence of compelling evidence that one activity cannot be sustainably performed without a compensation system, while the other is subject to market forces. It should at least be comprehensible why certain underlying principles, when moulded into substantive rules, result in different outcomes in relation to very similar problems.

For intellectual property systems to be, or rather become, sustainable, they must have legitimacy, which can best be achieved by policy choices that can be explained. Their acceptance within society is essential to achieve compliance with and trust in substantive legal rules. Furthermore, the usefulness of sustainability as a policy goal for intellectual property law and policy depends on a proper understanding of its imperatives. It is necessary to be realistic about the role intellectual property law can play in shaping sustainable economies, and to consider seriously the negative implications if the law ignores the interests of those that sustain essential technologies. The societal bargain reflected in intellectual property law must be considered carefully and requires occasional readjustments. Reactive policymaking – or worse, wandering in the dark without a guiding light – has the potential to undermine trust in intellectual property law; anticipatory law-making is difficult, but can be facilitated by structural principles that guide the legislator. To regulate aspects of AI through intellectual property law, it is instrumental to consider the elaboration of these principles as the starting point of a process that can lead to increased sustainability and acceptance of new technologies and their governing legal framework.