1 Introduction

The pervasiveness of AI-based applications in many dimensions of human and social life has generated a prolific ethical debate. From ethical investigations of immediate micro-interactions, such as using a search engine or a chatbot, to professional contexts, such as diagnostic AI in medicine and automated risk assessments, to structural issues, such as predictive policing, the “ethics of AI” has become an integral part of developing and using AI. Until recently, however, relatively little research has been devoted to the ethics of AI’s potential impact on, and contribution to, the structural conditions of human life. By this, we mean the pervasive and potentially lasting effect of integrating AI technologies into societies, with far-reaching economic, social, political, and also environmental consequences. These effects can also be discussed under the label of “sustainability”. The present paper contributes to this debate with the aim of paving the way for a realistic assessment both of the potential of AI to contribute to “sustainability”, avoiding exaggerated expectations, and of the limitations of “sustainable AI” to meaningfully and lastingly improve human lives and living together. This makes our reflection on “sustainable AI” part of an expansion of AI ethics into a wider normative debate about AI justice.

The paper is organised in the following way: Chapter 2 introduces the notion of “sustainability”, offers a short overview of its recent use in the debates about AI, and explains the importance of providing further conceptual clarifications and distinctions about the notion of sustainability in the context of AI. Chapter 3 discusses the use of AI to advance the end of sustainability in both its environmental and its social dimension, while Chapter 4 discusses the environmental and social costs, and the potential sustainability, of using AI-based tools. Chapter 5 integrates the prior discussion about the different dimensions of sustainability in the context of AI into a set of criteria that allow for a multi-dimensional assessment of a particular AI application as sustainable or not. To account for the complexities of the subject matter, we propose to distinguish between a thin and a thick understanding of sustainability: while thin sustainability merely acknowledges that an AI is sustainable in some respect (and therefore does not ultimately vindicate an assessment of being sustainable overall), only thick sustainability allows for a comprehensive verdict of an AI as genuinely sustainable. In consequence, we recommend using the notion “sustainability” more carefully and sparingly. Only the more ambitious goal of “thick” sustainability can guide the use of AI-based technologies to meaningfully contribute to actual improvements of human lives and living together. Current conditions of a growth-oriented economy, however, may undermine the possibility of developing “thick” sustainable AI.

2 Sustainable AI?

Sustainability is a complex notion that, despite its popularity, retains ambiguities.Footnote 1 Since its introduction into modern and contemporary debates about development and the environment in 1987 by the “Brundtland Report” (named after the chairwoman of the World Commission on Environment and Development sponsored by the United Nations), sustainability is standardly understood as the characteristic of meeting “the needs of the present without compromising the ability of future generations to meet their own needs” [46]. The Brundtland report thus bases sustainability on the importance of meeting the multiple different needs of people, enriched by a strong intergenerational claim: the needs of future generations must not count any less than those of the present generation.Footnote 2

Often, sustainability is understood as comprising different dimensions or “pillars”: an environmental, a social, and also an economic one [36].Footnote 3 All of them are said to be interrelated, with mutual influences between them, and thus need to be taken into consideration jointly when it comes to assessing and securing sustainability [5, 16]. Among the numerous ambiguities in the notion “sustainability” (from Brundtland to the UN’s sustainable development goalsFootnote 4), the unclarified hierarchy between the presumed pillars stands out: How should they be weighted against one another? Which one should be given priority in case of conflict? Considering them as co-equals gives rise to numerous problems when it comes to actual assessments or decisions, because trade-offs will often be necessary [35]: Is the preservation of natural resources or environmental systems more important than achievements regarding social justice, such as a reduction of poverty, or can it be discounted against them? And what exactly is the role of sustainable economic growth: What amount of environmental or social costs can be justifiably incurred to allow for continued economic growth? These and related questions, relevant to the understanding and use of the notion “sustainability” since its introduction into the debate, emerge again today in discussing the presumed sustainability of AI-based technologies. In this paper, however, we focus our discussion of “sustainable AI” on the dimensions of environmental and social sustainability.Footnote 5

Another ambiguity is exposed by the question of what exactly can be considered sustainable—or not. Originally introduced in the Brundtland report as a qualitative feature of development (“sustainable development”), the notion’s adjectival or adverbial use was rather specific. Calling particular interventions or technologies sustainable, for example, did not fall under this original focus. Neither did the original use include the abstract noun “sustainability” to refer to a particular state of affairs. While such an expansion obviously connects to the original use and is now firmly established in contemporary discourse, glossing over the distinctions between different adjectival or substantive uses of the concept may open it to problematic uses, making a precise definition of the notion imperative.

Today, talk about artificial intelligence is increasing and becoming ubiquitous: AI has become a fashionable and even hyped topic. It is frequently seen as an all-purpose tool to address challenges of all kinds: economic, social, and also environmental. The European Commission’s High-Level Expert Group on Artificial Intelligence, for example, declared AI to be “a promising means to increase human flourishing, thereby enhancing individual and societal well-being and the common good, as well as bringing progress and innovation. In particular, AI systems can help to facilitate the achievement of the UN’s Sustainable Development Goals, such as promoting gender balance and tackling climate change, rationalising our use of natural resources, enhancing our health, mobility and production processes […]” [24]. Given that the present is shaped by numerous significant and complex challenges—such as a dramatic climate crisis, increasing concern for social and economic injustice, etc.—it is not surprising that AI is also discussed and appreciated with regard to its “sustainability,” in particular as a potential means to address the mentioned challenges and to advance the cause of sustainable development [26, 44]. Not least, being able to label something “sustainable” significantly increases the likelihood of political and public acceptance. Here is an example: The Ethical Principles listed in the Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems, published by the European Commission’s European Group on Ethics in Science and New Technologies (EGE), emphasise not only the principle of “human dignity” as a central demand for the use of AI, but also “sustainability.” Sustainability, here, is understood as the demand that ethical AI technologies “be in line with the human responsibility to ensure the basic preconditions for life on our planet, continued prosperity for mankind and preservation of a good environment for future generations. 
Strategies to prevent future technologies from detrimentally affecting human life and nature are to be based on policies that ensure the priority of environmental protection and sustainability.” [14]. In this and similar statements, the environmental dimension of sustainability takes centre stage, but “continued prosperity for mankind” also highlights the social dimension of sustainability.

To structure the debate about “sustainable AI” in its multifaceted use, we initially take up a distinction between two main dimensions of sustainability in the context of AI proposed by van Wynsberghe [45]. In the context of the looming environmental crisis, AI has been cast as a promising tool or a means in combating the devastating consequences of the crisis, e.g., by allowing us to find smart strategies in mitigating the changes in the world climate and its effects on human and other life, and adapting our built environments to function in this changing world climate (cf. e.g. [8]). Such a use of AI for sustainability that focuses on sustainability as an end is the first dimension. The other dimension addresses the sustainability of AI technologies by asking whether AI as a technological practice is in and of itself sustainable, e.g., does not waste resources (cf. e.g. [42]) or does not create avoidable amounts of toxic waste [10]. This perspective considers the sustainability of AI as a means or tool itself that can be deployed to achieve certain ends.

Van Wynsberghe concludes her discussion by defining “sustainable AI” as “a movement to foster change in the entire lifecycle of AI products (i.e., idea generation, training, retuning, implementation, and governance) towards greater ecological integrity and social justice” [45]. Differing from her perspective, we do not inquire into “sustainable AI” as a “movement” but take a conceptual perspective, taking up her basic distinction between the sustainability of AI and AI for sustainability and developing it further. In the following, we outline a matrix to distinguish different dimensions of sustainability that apply to both the sustainability of AI and AI for sustainability, and that provide a more nuanced and justified basis for a comprehensive assessment of whether any use of AI can count as sustainable or not. Such a more fine-grained assessment is necessary (a) from a theoretical perspective, to avoid conceptual confusion resulting from an impoverished and simplistic understanding and use of the notion, and (b) from a practical perspective, to avoid any possible instrumentalisation or exploitation of such conceptual confusion. After all, only an adequate understanding of “sustainable AI” allows us to distinguish appropriate from inappropriate use of this label. This is important in particular because, as we will show, strong political and strategic interests exist in being able to label something as “sustainable”, which can increase the likelihood of social and political acceptance, and thus of widespread implementation and use. Given that strong economic and private interests currently shape the development and use of AI, it is imperative to have a clear understanding of what counts as genuinely sustainable, lest this notion be abused for different types of ethics-washing (cf. also [17]).

3 AI for sustainability

In a widely cited study, Vinuesa et al. [44] analyse the potential impact AI applications can have on the sustainable development goals (SDGs). They conclude that current AI technology, if used at scale, can contribute to the achievement of 90% of environmental goals, 70% of economic goals, and 82% of social goals. These bold assessments suggest that AI applications can have an enormous impact on increasing the sustainability of life on earth. Such claims, however, need to be scrutinised carefully.

3.1 AI for environmental sustainability

Some examples of the utility of AI give rise to hope that this is a reliable estimate, and they have carried the discourse that many “green AI” applications are indeed worth developing and applying on a larger scale. The ability of AI to help count trees based on data from unmanned aerial vehicles [2], or to give more precise estimates of biodiversity in remote areas [41], appears to offer appealing cases of how AI can genuinely and effectively contribute to achieving sustainable development goals.

However, to avoid cherry-picking cases that create a biased picture about the prowess of AI for environmental sustainability, we ought to investigate the potential side-effects of using AI to achieve singular environmental SDGs.

First, there is a risk of focussing on using AI for achieving one SDG alone. Vinuesa et al. do not specify how the targets enabled by AI in one pillar (environmental, economic, or social) can, in practice, disable or inhibit the achievement of targets in the other pillars. If an AI application, e.g., for analysing strategies for protecting biodiversity, recommends the abandonment of long-inhabited, culturally relevant settlements (and thus calls for a displacement of some peoples), the net effect on sustainability on earth may be zero or negative. Not very much is gained if one SDG is achieved by AI at the expense of another. Evidently, not every attempt at utilising AI to achieve an SDG will lead to a violation of another SDG. However, the possibility of such competing interests ought to be accounted for when creating AI applications dedicated to achieving singular sustainability goals.

Second, we may consider that within a market-based system, increases in efficiency (understood as increasing the return or output at similar levels of investment or input, e.g., in energy) could lead to lowering the price of that output, with a subsequent increase in demand and consumption. This rebound effect of innovation (cf. e.g. [34]) can occur in different areas of using AI applications, as their contributions are often framed as process-streamlining or optimization. Take, e.g., the ever improving efficiency of car engines. An AI system that can optimise the fuel consumption of such engines according to a person’s driving habits, road conditions, and other data can reasonably claim to be “green” as it saves on gas and thus reduces emissions. However, we may anticipate a rebound effect (lower energy costs for fuel can lead to more people driving more often) that can lead to quite the opposite effect of saving greenhouse gas emissions (e.g. [32]). Without a wider view of how a rebound effect in, e.g., energy consumption can have adverse effects on sustainability goals, considering an improvement of a process sustainable in itself risks falling short of sustainable development goals.
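The arithmetic behind such a rebound effect can be made concrete with a small, purely illustrative calculation. All figures and the elasticity parameter below are hypothetical assumptions for the sake of the sketch, not values drawn from the cited studies:

```python
def net_emissions(baseline_km, litres_per_km, efficiency_gain,
                  rebound_elasticity, kg_co2_per_litre=2.3):
    """Illustrative rebound-effect calculation (all inputs hypothetical).

    efficiency_gain: fractional reduction in fuel use per km (0.2 = 20%).
    rebound_elasticity: fraction of the per-km saving that induces
    additional driving (0 = no rebound, 1 = full rebound).
    """
    new_litres_per_km = litres_per_km * (1 - efficiency_gain)
    # Cheaper driving induces more kilometres driven.
    extra_km_factor = 1 + rebound_elasticity * efficiency_gain
    new_km = baseline_km * extra_km_factor
    return new_km * new_litres_per_km * kg_co2_per_litre

baseline = net_emissions(10_000, 0.07, 0.0, 0.0)    # no efficiency gain
no_rebound = net_emissions(10_000, 0.07, 0.2, 0.0)  # 20% gain, no rebound
rebound = net_emissions(10_000, 0.07, 0.2, 0.5)     # 20% gain, 50% rebound
```

On these illustrative numbers, the 20% efficiency gain with a 50% rebound delivers only part of its nominal saving; in this simple model, an elasticity above 1 (so-called backfire) would push net emissions above the baseline.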

Third, the worry of paving the way for a “green Leviathan” has been introduced by Coeckelbergh [8, 9]. Subordinating all forms of human activity to the requirements of climate adaptation or mitigation opens the possibility of infringing legitimate freedoms. This partially echoes our first point: focusing solely on “sustainability” as a collection of “green” criteria for AI misjudges the costs imposed on social sustainable development goals. Coeckelbergh ultimately rejects the dichotomy of “climate action” versus “free societies” as a consequence of an increased use of AI for sustainability. Yet, the risk of using “AI for sustainability” as a step towards increased power-concentration in the name of climate action gains an even more concerning dimension if we consider that AI in itself might not be as powerful in achieving these climate goals as is often presented.

Fourth, promoting the idea that marginal improvements of processes can count as meaningful contributions to making the world a more sustainable place to live can have secondary effects on how the general project of a sustainable earth is viewed. These may include a false sense of sufficient engagement, or overall toothless attempts at GHG emission reduction. The conviction that we may find enough AI applications to optimise or compensate for all the unsustainable processes, behavioural habits, and lifestyles we maintain can be identified as a form of technosolutionism (cf. [31]). However, considering how little actual AI applications have thus far contributed to any national sustainable development goal, and how resource consumption and GHG emissions are concentrated in a few very rich nations of the global North, we have little reason to believe that technology alone, without fundamental structural changes in the lives of many, can fix the world’s current sustainability challenges.

3.2 AI for social sustainability

Next to efforts to demonstrate the utility of AI for environmental sustainability goals, a growing number of researchers suggest the utility of AI for social sustainability—Vinuesa et al. suggest that 82% of social sustainability goals may be advanced using AI. Such efforts are often intended to counteract the decidedly unsustainable effects of some algorithms, like the recommendation algorithms on social media sites that foster political radicalisation [40]. Indeed, we may find that some AI applications can help achieve goals of sustainable development, thus allowing AI to be used for social sustainability (cf. [4, 39]). For example, organising agricultural resources to avoid famines in vulnerable regions [49] or making high-quality education more accessible worldwide can count as using AI for social sustainability.

However, we may see similar issues as we have with AI for environmental sustainability when isolating these contributions as sustainable. Consider the first point from earlier: without contextualization of the pursued goal, we may miss a potential conflict in ends (i.e., pursuing one sustainability goal at the expense of another). This occurs when the implementation of an AI application intended to achieve a specific goal, e.g., a large language model aimed at increasing and aiding intercultural communication, fails to consider its environmental impact.

An even bigger concern might be the risk of a technosolutionist mindset for social issues. While many of the current social sustainability efforts are responses to preventing or decreasing the harm caused by AI in the first place (take, for example, rampant disinformation aided by recommendation algorithms and the aim to reduce disinformation using other AI [47]), attempts at making society more sustainable through technology presuppose that many current social issues are just a technological fix away. Such a presupposition, however, is blind to some fundamental social injustices that cannot be erased using more or different technology, but require low-tech or even no-tech social and political interventions instead (cf. [12], [21]).

In sum, “AI for sustainability” can, at its best, serve as one tool among many to improve some processes, mostly by detecting previously unknown correlations and increasing efficiencies in uses of resources. This is not to say that these processes cannot have a measurable impact, and thus, they should be recognized as such. However, it is imperative to keep in mind the limitations of an approach that pursues the optimization of processes within a system that might itself not be sustainable. In reflecting on the potential uses of AI for sustainability, we merely reflect on the ends to which AI could contribute, without considering the means necessary to achieve such goals. This invites selective views on the potential of AI for sustainability goals, as it does not require further consideration of the circumstances of creating these AI applications, or of the conflicts and opportunity costs of using AI for specific sustainability goals rather than others.

While AI for sustainability is a hotly debated issue in many areas of human activity, we also ought to concern ourselves with the sustainability of AI itself as a growing industry with its own potential and actual sustainability issues.

4 The sustainability of AI

AI technologies are becoming ubiquitous elements of infrastructure. Their development, production, and use require massive amounts of material resources, energy, and human labour. Thus, they give rise—not only through their computational power and potential but also through their material existence—to numerous ethical issues that are highly relevant from a sustainability perspective that attends to the entire socio-technical and environmental conditions and dynamics in the context of an AI application.

4.1 Environmental sustainability of AI

Doing things on the Internet requires electricity, regardless of whether it is done by human users or by AI-based applications. The electricity to power this digital infrastructure, however, is still largely provided by fossil fuels. In consequence, there is a fundamental connection between environmental concerns on the one hand, and the uses of the digital infrastructure, especially by energy-intensive processes such as blockchains or AI-based applications like large language models (LLMs), on the other hand.

Vinuesa et al.’s paper on the role of AI in achieving the SDGs tends to treat AI technologies as mere tools, failing to acknowledge the material costs tied to their existence and use [44]. First attempts have been undertaken to check whether these hopes are actually likely to come to fruition (see [38]), and awareness of the environmental and human costs of these systems is increasing, e.g., with regard to the energy needed for training neural networks [3, 42]. This becomes especially apparent when it is shown that AI-based technologies can not only be used to develop strategies for sustainable cities, precision farming, smart grids, etc. (see above, Chapter 3), but are in themselves challenges for sustainability [6, 45], or when a full “Atlas of AI” exposes the political and economic power-relationships, planetary dynamics, and costs of AI systems generally [10].

The consequences most researchers draw from these findings, however, are limited in their scope and concerned only with the sustainability of the means to achieve a functioning AI. For example, the suggestion to attach a label to algorithms indicating how much CO2 and computing power were used to create them (e.g. [3, 45]) may help indicate how sustainable a corporation was (or attempted to be) in its means of creating an AI system, but remains silent on whether the pursued goals of the corporation and its AI systems deserve to be called sustainable overall.
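To see what such a label would (and would not) capture, consider a first-order, back-of-the-envelope estimate of training emissions: the energy drawn during training multiplied by the carbon intensity of the electricity grid. All figures below are hypothetical assumptions for illustration; the sketch only shows that such a label quantifies the means of production:

```python
def training_co2_kg(avg_power_kw, hours, pue=1.5, grid_kg_co2_per_kwh=0.4):
    """First-order estimate of the CO2 tied to one training run.

    All default values are illustrative assumptions, not measured figures:
    pue is the data-centre overhead multiplier (power usage effectiveness),
    grid_kg_co2_per_kwh the carbon intensity of the local electricity mix.
    """
    energy_kwh = avg_power_kw * hours * pue  # hardware draw plus overhead
    return energy_kwh * grid_kg_co2_per_kwh

# Hypothetical run: 10 kW average draw for 500 hours of training.
label_kg = training_co2_kg(10, 500)
```

Whatever number such a calculation puts on the label, it says nothing about whether the model being trained serves a sustainable end.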

At best, this limited view of sustainability incentivises corporations to reduce their carbon footprint and offset some of the negative environmental consequences of their production. At worst, it allows a market of ineffective or misleading “green AI” labels that contribute nothing to reducing carbon emissions, lulling consumers into a false sense of effective contribution to environmental sustainability, i.e., “greenwashing”.

In the following, we offer two additional considerations, regarding the distribution of toxic waste tied to the production of AI-based technologies, and the role of AI-based efficiency gains in a growth-oriented economy. First, providing materials for manufacturing and using AI-based applications at scale not only requires enormous amounts of energy and natural resources, but also produces enormous amounts of partly highly toxic waste, particularly once the devices become outdated or defunct [10, 28, 33]. Such material costs inevitably generate lasting environmental burdens—even if the production chain becomes increasingly efficient, waste management improves over time, and the technologies can be run at higher rates of efficiency.

Second, similar to the concern regarding the rebound effect in the previous chapter, it is questionable whether the self-optimization of AI technologies is an effective strategy to reduce their environmental impact and ecological footprint. While AI applications enable the improvement of production processes in other industries, reducing costs and incentivising more resource and product consumption at the same price, they simultaneously enable producers to use these resource-efficient and “green” technologies to increase their output, retaining the same carbon footprint while being more efficient. Any possible environmental gains connected to lower inputs of energy or resources will then ultimately be annihilated by the overall growth in the number of AI-using tools and of a further expanding economy. Increasing the sustainability of AI technologies can, at best, only have a delaying effect: the moment at which the planetary boundaries are crossed will merely be reached a bit later.

When considering the sustainability of AI, the challenge is therefore much larger than an invitation “to remember that there are environmental costs to AI” [45]. A genuinely sustainable use of AI will require a combination of environmental concern on the one hand, with concern for socio-political and economic issues of justice and power relations in society on the other.

4.2 Social sustainability of AI

The conditions for producing AI in both an environmentally and socially sustainable way have gained attention. Several researchers have pointed towards the socially unsustainable business models on which some AI companies are based [7]. From the socially exploitative outsourcing of labour to low-income areas, to questionable data-sourcing strategies, to political lobbying to remain legally unencumbered (e.g., the promotion of AI ethics as a strategy to avoid legal action against big tech companies), to accusations of sexism and racism within the workforce of AI companies [27], the development and use of AI is often not socially sustainable in the sense of contributing to human flourishing.

Many issues regarding “socially sustainable AI” have been recognized as a challenge to such AI companies. The first concern lies with transparency about how the data a company uses to train its neural networks are sourced. The power a data-collecting company gains over those it collects data from, and the means by which a company gains access to these data, further contribute to socially unsustainable developments. The concentration of power in the few who “own” the data of millions of people can lead to other socially disruptive and unsustainable developments. Transparency about the sourcing of data is a useful tool for countering these developments, though the quality and scale of transparency required to effectively counter unsustainable business practices is controversial [15].

Furthermore, the financial gains made from AI-based systems are distributed extremely unequally, as can be shown by comparing the annual financial gains of, say, Amazon’s CEO on the one extreme end and the underpaid labourers working under dangerous and exploitative conditions in the cobalt mines on the other extreme end [10, 11]. Such massive and often even increasing socio-economic inequalities cannot be considered socially sustainable, insofar as they allow some to accumulate further gains at the expense of others.

We should demand that companies build their business models on a socially sustainable basis, and should welcome any company that commits itself to improving transparency, inclusivity, fairness in wages and opportunities, and the impact of its activities on the social fabric it relies upon. However, in attempting to make the means of AI production more sustainable, one may again lose sight of the ends to which such AI is used. If, for example, we merely consider clean energy or a diverse workforce in AI companies when judging whether a manufacturer is a “sustainable” company, without considering what ends this company is pursuing, the label “sustainable” may be misleading, because it is insufficiently ambitious. This way, “sustainability” may be used to portray a company in a socially responsible, ethically laudable light while still not contributing to any more ambitious concept of sustainability. Bender et al. point towards this tension in their paper on “stochastic parrots” [3] in a striking way: not only do they question the social sustainability of large language models by pointing towards issues of how these language models came about, but they also question the ends at which their intended use is supposed to aim—from a sustainability perspective.

To sum up: Sætra [38] structures the debates surrounding how “sustainable” AI can be by referencing the UN’s SDGs and maps these onto the current AI-production landscape, including the uneven distribution of access to and influence on production, and of subjection to the adverse effects of such production (cf. also Crawford/Joler [11] for a detailed analysis of the life-cycle of Amazon’s Alexa, spelling out the complete, material picture from the required resource consumption to toxic waste). Some may object that these issues constitute nothing but the normal background which, though regrettable, is not directly relevant for a discussion of the ethics of AI technologies. On this account, the distinctive (sustainability) achievements and risks of using AI-based applications should be at the centre of the ethical assessment, and the environmental challenges should be ignored or placed in a larger context of debate. However, it is a longstanding mistake, also made elsewhere in moral philosophy and applied ethics, to assume that an ethical assessment such as one focussing on sustainability could limit itself to what is done by some (developers, users, etc.) while ignoring the influence of presumably normal and thus acceptable background conditions that, on closer reflection, may turn out to be morally unjustifiable and indicative of structural injustice (cf. e.g. [48], pp. 120–121; [20, 29]). To counter such a limited ethical perspective, a sustainability lens will direct attention to the social structures within which the application of AI technologies takes place.
In echoing the previous analytical frame of “AI for sustainability”, we can frame this debate in terms of means and ends: while the “AI for sustainability” discourse concerned itself with achieving certain sustainable ends without including the means of AI production, “sustainable AI” in this alternative sense concerns itself merely with the sustainable means of AI production rather than the ends for which it is used. Adopting a broader focus of analysis, however, may very well shift the assessment of a particular technology from sustainable to distinctively unsustainable.

As things currently stand, our global infrastructure that lies in the background of devices required to run AI-based applications is marked by massive, structural injustice: persistently and systemically, the advantages and disadvantages are unequally distributed between different groups of actors, as the analysis of the production chain, data extraction, disposal and the unequal distribution of power, influence and profits makes abundantly clear. The consequences and risks of developing and using AI thus have to include a critical assessment of the numerous dangers of unsustainability that are tied to the development, use, and disposal of AI technologies.

5 From thin to thick sustainability

The preceding chapters have shown different possible, and possibly competing, strategies to determine the sustainability of an AI-based technology. Before offering a way out of this situation by proposing a comprehensive account of “thick” sustainability for normative assessments of AI, we want to explain why such a proposed solution actually matters. AI-based technologies are clearly fascinating, probably figuring among the most sophisticated achievements of human ingenuity so far. Developing, producing, marketing, selling, and using them, however, are influenced by numerous factors external to the AI technology itself that nevertheless frame its use and shape its impact: in particular, the economic interest in promoting growth and making profits on the one hand, and the political interest in establishing a strong geo-political position on the other.

The first goal in growth-oriented market economies is making and increasing profits. Growth is also said to be an economic necessity, providing the basis for flourishing societies and the welfare of their members.Footnote 6 Not pursuing growth, on this account, is then seen as an attempt to undermine business and basic social functioning. Innovation, i.e., the invention, production, and marketing of novel ideas and products, is an essential driver of economic growth. Clearly, advances in computer science and artificial intelligence promise further innovation and economic growth, which explains the excitement of markets about them. Yet, it has long been known that within a confined system such as planet Earth, eternal growth in production—which presupposes continuously growing amounts of resources such as materials and energy, and also produces negative outputs such as waste—will eventually meet insurmountable limitations [30], e.g., in the form of the much discussed “planetary boundaries” [37]. This existential problem is finally receiving increasing attention, particularly in the looming climate crisis: perpetual economic growth, particularly exponential growth, will, if not moderated, eventually lead to systemic collapse.

The decisive question in the present context is whether a growing market of AI technologies and their applications can indeed be part of the solution, preventing environmental collapse or social disruptions while allowing for “green growth” [1], or whether even more sustainable AI products remain part of the problem, driving further development towards collapse through the sheer, ever-growing quantity of (ever more efficient and more sustainable) products. In this context, some market participants have a strong economic interest in being able to label AI technologies “sustainable”, so that justified concern about the looming systemic collapse—to which the growing use of AI is contributing—can be dispelled.Footnote 7

A second strategic interest in a presumably sustainable AI is (geo-)political: global competition exists also in political terms, with regard to different systems of governance and political participation. Correspondingly, different world regions are developing AI strategies aligned with their particular political preferences. The potential to assume a leading role with regard to sustainable AI allows a region to present itself as a better alternative to unsustainable AI developers. As an illustration, sustainability—both in its environmental and its social form—is apparently valued higher in current public opinion and on political agendas in North America and Europe than in China. China pushes AI-based technologies from the top political leadership to secure a global competitive edge without particular environmental concern; and it deploys sophisticated AI-based technologies also to surveil and control its people’s behaviour without particular concern for individual preferences, political freedoms, or privacy. Presenting oneself as a proponent of—environmentally and socially—sustainable AI thus also comprises a political dimension and conveys a political statement, namely a commitment to some distinctive, positive values, at least from the perspective of the liberal countries of the global North, which allows for presenting oneself as superior in global comparison.

Here again, our earlier analysis has shown that such assessments of sustainability are complicated and require a more nuanced analysis; presuming a simple black-and-white opposition between, say, China and the US is insufficiently complex and cannot hold, in particular as infringements of privacy and other political risks associated with an increasing use of AI can be found anywhere. In the present context, however, we only meant to point out and highlight the existence and influence of strategic, political interests in being able to claim the sustainability label for oneself.

In sum: given the massive influence of economic and political interests, it becomes clear that labelling a technology as sustainable is attractive not solely because of its contribution to (environmental or social) sustainability per se, but also for external, namely economic and political, reasons. Developers, companies, and also governments have a vested interest in being perceived as sustainable in their practices. The fundamental problems (that economic growth is, in spite of increases in efficiency, directly related to an increased use of resources and therefore can hardly qualify as sustainable, and that high social costs may be tied to an increased implementation of AI-based technologies in societies) are frequently ignored. In consequence, the label of sustainability is frequently used without sufficient basis as a means to “ethics-wash” an ultimately environmentally or socially harmful practice.

Green-washing is the particular form of ethics-washing that focuses on, and attempts to obscure, the environmental impact of a given AI technology. It is widespread and partly based on an insufficient understanding of sustainability among corporate leaders.Footnote 8 Other forms of ethics-washing may target the social impact of such technologies, e.g., when AI-based technologies are said to increase inclusion or equality in some way, while ultimately they perpetuate and even further cement an already pre-existing social or digital divide (cf. e.g. [13]).

Without having to deny the potential of AI-based applications to be sustainable in the different forms distinguished above, we urgently need a better, positive understanding of what should count as genuinely sustainable, i.e., conducive to improvements in human lives and living together in a way that does not negatively affect the quality of lives and living together of future generations [18]. To this end, we propose a comprehensive evaluation of any AI-based technology that attends to all four dimensions of sustainability explained above: the actual effect of using AI to advance environmental or social sustainability, i.e., an assessment according to the ends and outcomes, and the environmental and the social sustainability of the AI technology as a tool itself, i.e., an assessment according to the means. These distinctions can now be summarised in the form of the following matrix (Table 1).Footnote 9

Table 1 Dimensions of sustainability in the context of AI

While many AI-based technologies are indeed sustainable in one or even several of the four dimensions of sustainability in the context of AI, none of the four is in itself a sufficient condition for a comprehensive judgement of an AI as sustainable. Instead, all four—insofar as they are applicable—are necessary conditions to justify the overall verdict of being sustainable.

We can now map one of the examples mentioned above onto this matrix, easing the analysis of its sustainability contribution and specifically pointing out its weaknesses and strengths with regard to a thick concept of sustainability. Take the construction of large language models (LLMs) discussed in the paper on stochastic parrots: Bender et al. problematize the high costs of such models for environmental sustainability, as the energy consumption and contribution to greenhouse gas emissions involved in training these models are considerable. Thus, the contribution to the means-dimension of green AI is negative. Further, it remains unclear from an ends-perspective just how the LLM is supposed to contribute to environmental sustainability, so it remains, at best, neutral in this dimension. From a social sustainability perspective, then, training this particular model was problematic due to the pre-emptive filtering of (presumably sexualized and racialized) language in order to prevent the LLM from making sexist, pornographic, or racist comments, so that it can be safely used in all kinds of conversational contexts. While the creators of the LLM would argue that this contributes to social sustainability and is thus a plus on the “ends” side, others have pointed out that such filtering erases the language, and thus the representation, of vibrant subcultures within the language model, leading to an unsustainable mainstreaming and erasure of such identities. Finally, we can question where all the data Google used to train this LLM came from. What we do not have to question is the negative effect Bender et al.’s paper had on the job prospects of Timnit Gebru and others [19]. This demonstrates that their employer, Google, still has a long way to go towards AI that is socially sustainable as a means.
Overall, from our ambitious sustainability view, it becomes clear that the development, production, and use of this language model do not, in spite of some potentially positive impact in some dimensions of sustainability, pass the proposed four-dimensional sustainability test.

Thus, doubt about the possibility of truly sustainable AI is in order, and one should be careful in using the label “sustainable AI”. Under current conditions of an economy oriented towards permanent growth, even “thick” sustainable AI is unlikely to be truly sustainable. After all, increases in efficiency through an AI that is built, run, and disposed of sustainably will be undermined if ever larger numbers of such systems are created, rendering the entire concept of sustainable AI, under current conditions, self-defeating. Perhaps AI can unfold its beneficial impact on increasing sustainability only when paired with attempts to curtail resource use and economic growth across the board.

5.1 Thick sustainable AI: we are not there (yet?)

Our analysis of the social and environmental dimensions of AI-based technologies as tools, and of the outcomes of using AI-based technologies, supports our case for a more nuanced and, in consequence, more sparing use of the label “sustainable” in the context of AI. Improving the sustainability of some AI-based technology in one or some dimensions, while neglecting the others, is simply insufficient to vindicate the overall assessment of sustainable AI. To capture this distinction—and to acknowledge that some dimensions of sustainability can indeed be met—we conclude this paper by proposing a terminological distinction between thin and thick accounts of sustainability: only if all four dimensions are, where applicable, positively assessed is the verdict of “thick” sustainability warranted; in all other cases, we can at best justify talk of “thin” sustainability.

One important question, and potential objection against our proposal, needs to be addressed right away, namely: isn’t thin sustainability better than no sustainability, so that we should welcome it nevertheless? Or, in other words: shouldn’t we welcome Pareto-improvements, i.e., improvements in one or some domains of sustainability without deterioration in any other; or does truly sustainable AI indeed have to satisfy sustainability expectations in each dimension? After all, why not welcome—and label as sustainable—any improvement in, say, energy efficiency, even if other sustainability dimensions are not affected? We propose the following answer: sustainability is a powerful notion and should be kept as an ambitious goal, not cheapened to accommodate, e.g., the economic or political interests outlined above at the expense of environmental and social concerns. Only AI-based technologies that are sustainable in all four dimensions deserve being called sustainable according to our thick understanding of the notion. All possible trade-offs between the different sustainability dimensions, all interim deviations, and all small incremental improvements in only one or a few dimensions that may be necessary to eventually reach full sustainability will thus disqualify the technology under consideration from the verdict of true or thick sustainability. After all, a technology that only increases sustainability in one dimension while failing to be sustainable across the board—maybe even being less sustainable in another dimension—simply is not sustainable overall in the original sense of allowing for continued use while leaving enough for future generations. Calling something unsustainable sustainable is not only conceptually confused, but a distortion of reality that only serves the interest of ethics-washing.

We recommend such a strict understanding of thick sustainability even if, under current economic and political conditions, no AI-based technology may, as of yet, qualify as sustainable.Footnote 10 Talk about sustainability should be understood as an invitation and incentive to increase genuine efforts to secure thick sustainability, also and particularly in the development and use of AI. Such efforts might not yield success easily. Possibly, genuine sustainability will ultimately remain out of reach if one focuses exclusively on the technology itself, without implementing significant reforms in the wider social and political environment within which AI-based technologies are developed, produced, sold, and used. Background conditions that would, from the outset, provide social structures with reduced asymmetries of power and a fair and equitable distribution of opportunities for good lives and living together would also increase the likelihood of developing and using AI in a genuinely sustainable sense to further advance human and environmental well-being. Given that AI is likely to increasingly influence and shape human lives in the future, AI ethics has to broaden its scope and also consider issues of AI justice [21].

Our analysis and argument have limitations. While we have identified important pitfalls in an uncritical application of the notion of “sustainability” to AI-based technologies, further research is necessary for detailed assessments of particular technologies that might be worthy of being called sustainable. The possibility of a genuinely sustainable AI has thus not been excluded. Yet, in our view, it remains generally questionable whether—in the absence of far-reaching economic, political, and social reforms—this possibility can become a reality any time soon. The identified challenges and impediments include conceptual confusion as well as practical (political and engineering) challenges. Furthermore, in spite of existing proposals for addressing them, the challenges appear difficult to overcome, maybe even insurmountable, at least under the actual economic and political conditions that favour the pursuit of permanent economic growth. Quick talk of “sustainable AI” can, under these circumstances, be perceived as a premature and, as of yet, mostly unjustified attempt to transfer some of the positive connotations of sustainability and the SDGs to a novel technology with massive economic potential. Thus, for now, we must beware of sustainable AI.

6 Conclusion

The ethical debate about AI has recently turned towards the question whether, and in which sense, using AI can be sustainable, distinguishing possible contributions of AI to achieving the end of sustainability on the one hand from the sustainability of AI and its underlying technologies as means on the other. This important distinction applies in the context of environmental as well as social sustainability. However, we have seen that talk about “sustainable AI” is often insufficiently complex and tends to selectively assess specific dimensions of sustainability alone, thus labelling a technology sustainable where such an assessment is not warranted. This increases the potential for green-washing in line with economic or political interests, revealing a potential “dark side of sustainability” to beware of. Further elaboration is necessary to capture the complexities of sustainability assessments in the context of AI. To this end, we proposed a more fine-grained analysis of the ends and means of “sustainable AI” in social and environmental contexts, which leads to a matrix of four dimensions reflecting its social and its environmental impact and costs. While a selective assessment can, at best, warrant the narrower verdict of “thin” sustainability, only a comprehensive assessment across all four dimensions can warrant the verdict of what we call “thick” sustainability. In consequence, we recommend broadening the normative scope of considering the ethics and justice of AI, using the notion “sustainability” more carefully and sparingly, and pursuing the more ambitious goal of “thick” sustainability of AI-based technologies so that they meaningfully contribute to actual improvements of human lives and living together. Current conditions of an economy oriented towards permanent growth, however, may generally undermine the possibility of realising sustainable AI.