1 Introduction

There is currently much discussion about artificial intelligence (AI) and its potential risks and benefits to society. The version of AI usually talked about is machine learning, which enables statistical analysis of (big) data and thereby automation and prediction. AI is increasingly used across all sectors, and more recently applications of large language models (LLMs) have raised much public concern, for example about job displacement and manipulation.

In discussions about AI ethics and AI policy it is often said that AI should contribute to the common good. For example, the recently established UN AI advisory body ‘aims to harness AI for the common good’ (Footnote 1), major AI companies such as OpenAI claim that they develop AI for the common good (Footnote 2), and already in 2018 the House of Lords Artificial Intelligence Committee’s report said that AI should be developed for the common good and the benefit of humanity (Footnote 3).

It is easy to imagine that AI can contribute to the common good. Consider for instance medicine and healthcare: one could argue that AI-powered diagnostic tools benefit society at large, for instance when they enable early diagnosis of cancer, help research into factors that contribute to Alzheimer’s disease, and improve the accessibility of care. This benefits particular societies and ultimately humanity. At the same time, some people may benefit more from AI than others. And when AI risks making some workers and perhaps even entire professions obsolete, including knowledge workers and creative professions, the claim that AI contributes to the common good becomes at least more controversial.

But what, exactly, is the common good, how do we know it, and who defines it? What is the collective implied in “common”: a local community, a nation, humanity? And who is included in that collective? The concept has a long history in political philosophy, which is usually neglected in AI ethics discussions (an exception is Berendt [1]). Moreover, the question of who defines the common good leads to the important question of the democratic character of AI governance; there are currently many worries about AI’s impact on democracy and the power asymmetries it creates [11].

In this paper I connect political-philosophical discussions about the common good to questions concerning the governance of AI. In particular, after sketching a conceptual framework based on the relevant political philosophy literature concerning the common good (and outlining my position in that discussion), I discuss AI and the common good in order to identify more precisely the democratic deficit or gap in current AI governance. While it is often said that this governance could be more democratic, this paper helps to specify what, exactly, is problematic if we view it through the lens of the discussion about the common good. It also indicates what we may agree on and what remains – and probably should remain – the subject of political discussion and contestation. Furthermore, in sympathy with the republican tradition in political philosophy (broadly construed), the paper points to the active role citizens can play in making sure that AI contributes to the common good. Going beyond Mouffe’s emphasis on giving voice and allowing political struggle, it calls attention to the creative and communicative aspects of active republican citizenship, and to the related need for civic education that prepares people accordingly.

2 The common good in political philosophy

The concept of the common good has a long history in political philosophy, going back to at least Plato and Aristotle. It is especially prominent in the so-called republican tradition of thinking about politics, according to which the state is not primarily there to safeguard individual liberties – as in modern individualist liberal thinking – but to promote the common good (and to promote a conception of liberty that is linked to that common good). The common good, Aristotle argued, means the good life in and of the polis: a flourishing society enabling flourishing people, that is, people living the good life (eudaimonia). According to Aristotle, the polis was established for the common good; to use it for private benefit only (Footnote 4), as for instance tyrants do, is a corruption. Moreover, the common good was linked not only to the notion of the good life (flourishing) in the polis but also to democracy: power should not be arbitrary; there should be self-rule. Political power is legitimized only when it is commonly accepted, when people govern themselves through a political process. This sounds familiar; we tend to understand this in terms of voting. But according to Aristotelian thinking, this self-rule is linked to the common good and therefore has two aspects. Citizens can vote, but they are also supposed to contribute to that common good. As Offe [13] has noted, there is not only a passive but also an active side to the norm of the common good: citizens experience the common good as beneficial, but they are also supposed to do their part in bringing forth the common good. With a modern twist, one could say that it is a civic duty or civic obligation to contribute to the common good [14] and a civic virtue [9] to do so. Similar thinking about politics is to be found in other ancient political systems and cultures, for example in Confucianism [21], and republicanism was further developed in the Renaissance and the Enlightenment (consider for example Rousseau). In 20th-century political philosophy, republicanism can be found in Dewey’s work, which, like Rousseau’s, tries to connect the liberation of individuals and the promotion of the common good [4]. The concept of the common good also played a role in communitarian criticisms of liberalism in the wake of Rawls’s work, and today republican thinkers such as Pettit [16] refer to the notion. Pettit [17] argued that the common good is about the common interests people have as members of the public; the state should then track the common good and interfere with citizens only in a non-dominating way.

This takes us to contemporary political philosophy, where a key question is how to define the common good. So-called substantive theories hold that we can define the common good a priori, for example based on human nature or human needs. Procedural theories, by contrast, have questioned this; they have warned against people who impose their definition of the common good on others or exclude others who do not belong to their collective (for example in an authoritarian regime). They have also emphasized that in societies with a pluralism of doctrines and conceptions of what is of value in human life [18], definitions of the common good can always be contested and perhaps should always be contested. Even if one accepts the Aristotelian definition, for example, it needs to be acknowledged that in a pluralistic society, ideas about what flourishing is and what the good life means differ. Therefore, even if everyone agreed on striving for the common good, what the common good means, exactly, and how it should be realized needs to be discussed continuously, and any particular definition and the corresponding laws should be open to contestation. This is also a requirement of the principle of freedom as self-rule. Pettit [15] speaks of ‘contestatory democracy’: enabling citizens to contest laws and the decisions of their representatives is a way to condition people’s freedom without compromising or violating it, since they can edit the laws that apply to them. This view, which is in line with Rousseau’s thought that it is acceptable to subject ourselves to what he called ‘the general will’ as long as there is self-rule [19], could also be applied to the common good: it is fine to be governed in the name of the common good as long as citizens can participate in defining what it is and as long as there is the possibility of democratic contestation, open to anyone in society. In Mouffe’s [10] view, this can and should have an antagonistic dimension: the common good will always be part of political struggle. While her emphasis on struggle is controversial, it is interesting here that the worry is not only the absence of freedom (and the corresponding rise of tyranny) but also that the political itself gets destroyed by means of an authoritarian understanding of the common good. This happens when technocrats or autocrats pretend that there is only one truth about the common good, that there is or should be no pluralism, and that there is no need for consensus-making, deliberation, or political struggle. How can this technocratic or autocratic turn be avoided? The solution proceduralists propose is procedures – democratic ones – that ensure inclusive and public deliberation about the common good and guarantee the possibility of contestation. The advantage of a procedural approach to democracy and the common good is that no prior agreement on the common good is needed; it is the outcome of the procedure(s).

It is questionable, however, whether procedures alone are sufficient to arrive at a shared notion of the common good, and whether some a priori consensus on what counts as the common good in general is not needed to give a normative orientation to the discussion and, ultimately, to the collective. Here it helps to distinguish between general definitions of the common good and more specific questions regarding their application. While it might not be possible to agree on what exactly the common good means and what it implies for, say, the governance of AI, it seems feasible to agree on a number of specific goods as common goods, at least at a general level. For example, we could agree that we want a clean environment, security, a fair society, and so on. We could then further deliberate – democratically – about what these common goods mean and how to implement them. This is not only a practical necessity, given disagreement and given the need to govern. According to the republican tradition, this deliberation (and participation in it) is a value in itself, not merely instrumental. It also helps to constitute the polis and its people as citizens. It ensures citizens’ freedom, is part of persons’ moral and political self-development, and is even a civic duty.

Moreover, the collective relevant to the “common” need not be defined in an exclusive way, in at least the following two senses. First, the common good does and should benefit people on the basis of their humanity, not only their citizenship. Consider for instance migrants or asylum seekers who do not (yet) have citizenship status, but who are and should be included in the collective on the basis of their humanity (even if we may disagree about what this means in practice). Second, we do not have to choose between the common good of a particular nation or community, on the one hand, and that of humanity, on the other. The common good can refer to collectives at various levels, from community to humanity. We need a multi-level understanding of the collective that is implied in the term common good. Again, that does not mean we all agree on what promoting the common good means in practice at a particular level and what it requires, for instance at the global level. Nor does it mean that we are relieved of having to discuss potential tensions and trade-offs between different common goods linked to different levels or different collectives. But we can agree that we should strive to promote the common good at all levels.

In addition, it is also important to specify the temporal dimension of the concept [13]. Do we think only of the common good of the current generation, or also of that of the next generations? And if so, how far should that time horizon extend? Should we discuss the future of humanity in the light of the distant future, as some participants in the AI ethics debate propose, or should we limit our scope to the next generations? This, too, is a political question. AI governance will look different according to how we define the temporal scope of the common good.

Finally, an interesting aspect of the ancient notion of the common good that is often left out of modern individualist discussions of democracy is the already mentioned active aspect: citizens are not only beneficiaries of the common good (and bearers of individual rights) but also have a duty to contribute to the common good. This is different from much popular contemporary political theory, which is always ready to demand that the voices of citizens be heard (see again Mouffe) and emphasizes that their individual rights need to be respected, but which asks little in return from citizens. From a republican point of view, the requirement to contribute to the common good can and should be added to a richer democratic ideal in which the common good plays a role. (Note that stressing a normative orientation to the common good does not entail that the notion of rights is necessarily abandoned. However, for the purpose of this paper I will not further address this complex issue regarding the relation between the good and the right.)

3 AI and the common good

Let us now apply this conceptual framework to the discussion about AI. Often the common good is invoked in a way that opposes it to private good: people who use the term often mean that AI should not (only) be used for private profit, but should contribute to the good of society. For example, Schneier and others have voiced the opinion that ‘A.I. could advance the public good, not private profit’ (Footnote 5). But as we have seen, the term “common good” is much richer in meaning. It can also refer more directly to what we have in common – for instance when an argument about data commons or digital commons is made (see for example Fuchs [8]) – or to various ethical and political principles that are related to the common good and to similar notions such as the public interest and the public sphere. For example, the UNESCO Recommendation discussed below contains some general principles and values that give more substance to the general norm that AI should contribute to the common good.

But does that mean there is full agreement about the common good? While most discussants in the AI ethics debate tend to agree that AI should contribute to the common good, they have different views on what that implies.

First, even if most discussants agree that AI should also be used for the common good, it is a matter for further discussion whether this means that private profit-making should be regulated, and if so, how, and what this means for the public/private status of data (whether it is acceptable to use publicly available data for making private profit, for example). Here the AI ethics debate becomes, or at least should become, subject to the rather classical debate about the role of the state, a highly political question. In US and UK political culture, for instance, the call to develop AI in a way that contributes to the common good is often understood in terms of self-regulation and philanthropy. This tends to be the view of major tech firms, and it has the support of some academics. For example, O’Keefe and his Future of Humanity Institute colleagues [12] have argued that AI firms should have the possibility to commit – ex ante – to donating a significant amount of their profits to benefit humanity. In the EU, by contrast, the demand to contribute to the common good tends to be understood in terms of regulation (and sometimes taxation), for example regulation by means of the AI Act, and in countries such as China the state takes an even larger role in regulating AI, data, and tech firms. One could also discuss whether AI itself should be understood as a commons (some would say a public good), or whether it should rather be treated like any other commercial product regulated by the invisible hand of the free market. Again, views on this matter are likely to vary across political cultures and publics. All of these views and approaches can and should be critically discussed; such questions regarding the role of the state and regulation are political questions, and there is nothing natural or self-evident about the answers, let alone that they can and should be answered in a purely technocratic way.

Second, when it comes to general principles and values, there seems to be sufficient consensus on what the common good is, or at least on which normative notions are instrumental to it. There are plenty of AI policy documents that list normative notions such as human rights, freedom, equity, justice, transparency, accountability, and sustainability, which can be defined as constitutive of the common good or at least as contributing to it. Consider for instance the UNESCO [20] Recommendation on the Ethics of Artificial Intelligence, which lists values and principles such as respect for human rights and freedoms, environment and ecosystem flourishing, diversity and inclusiveness, fairness and non-discrimination, and sustainability – all of which can be seen as constitutive of the common good, in this case the common good at the global level.

What is less clear, however – and this is again exactly where there is a political moment – is what these general notions mean, to whom they should apply (to which collectives, geographically and in time, e.g. to which generations), and how to implement them in practice, i.e. what kind of governance follows from them. Here democratic deliberation and participation can and should play a role. What these goods mean (and for whom), and who should decide about that meaning, can and should be contested and discussed, and this discussion should be inclusive and democratic. For the most powerful actors in the AI world it might be tempting to go for technocratic solutions that skip this step and pretend that we all know what the common good means and that there can be an uncontroversial, technically “objective” implementation of these principles and goods for a collective (and time horizon) that is not called into question. Both AI companies and governments might succumb to this temptation. They might even see AI itself as a technology that makes democracy obsolete [3]. In order to avoid this authoritarian, arbitrary, or overly technocratic direction, we need good democratic procedures, arguably supported by notions of right (and not just good – but as said, this is a topic of its own, which I will not treat here) and by input from experts. For example, constitutional rights and procedures can and should support democratic institutions in order to avoid autocratic and arbitrary exercises of power. This does not mean that expertise should play no role at all in AI governance, but rather that its role should be better embedded in, and properly limited by, a democratic framework. Such a framework should include the possibility for citizens to participate in decisions about AI and to contest the decisions of their representatives or the executive powers.

This democratic participation and deliberation about AI can be organized by means of the usual representative legislative processes (through parliaments) and public discussions in the media, but given the limitations of these democratic institutions (tyranny of the majority, turning citizens into passive consumers alienated from politics, etc.), we may also need to think of new institutions that enable citizen participation in decisions about AI and experiment with new deliberative procedures. For example, it has been argued that Citizens’ Assemblies can and should be used to shape AI governance in a more inclusive, democratic, and deliberative way (Footnote 6), and there have been experiments with deliberative polling, which combines polling with discussions and dialogues with experts and politicians (Footnote 7); this could also be used in and for AI governance.

Applying the discussion about the common good to questions regarding AI thus helps us to identify a democratic deficit or democracy gap in current AI governance. There is currently too little democratic discussion about what the common good means in practice and what needs to be done to achieve it; policy makers and multinational corporations hurry from the vague notion of the “common good” and related general principles and goods to very specific legal measures and technical operations, without the transparent and democratic process that should sit in between.

Some of this can be seen in the current process towards the EU’s AI Act, which moved from principles to concrete legal texts without a broad and transparent discussion of, for instance, what common goods such as security mean (and how they should be balanced against freedom and other goods), and with little or no participation of citizens. Citizens were left out of crucial negotiations about the EU’s AI Act: the so-called trilogue negotiations between the Council and the European Parliament in December 2023 (Footnote 8), for instance, which were crucial in the realization of the bloc’s AI regulation, were conducted by a small number of representatives, with no possibility for citizens to contest the resulting norms, let alone participate. There was also no open and participative political discussion about the underlying fundamental question of how far regulation by states and supranational entities should go when it comes to the governance of AI and other digital technologies. And the major corporate players in the AI world have all professed their loyalty to the common good of humanity, but at the same time they endorse and implement a very specific interpretation of that notion in their technical operations, without democratic input and deliberation. Consider for instance the ways ChatGPT and other large language models are developed and governed: although the technology is widely used, it is tech CEOs and the boards they answer to, not citizens, who decide what the common good means.

Finally, these common approaches to AI governance not only disrespect democracy in the sense that they do not allow citizens to participate in decisions regarding the common good (and the role of AI vis-à-vis that common good); they also disrespect the active role citizens could play in contributing to the common good with AI. At best (that is, if democracy is not ignored entirely), they see citizens too much as consumers of democracy and of the common good, who need to receive whatever is due to them (rights, their share of the common good) via state governance or via corporate self-governance. They do not ask of citizens an active participation in creating the common good, for example with the help of AI. From a republican point of view, however, such active participation would be required: not only via voting and deliberation, but also via the daily practices of citizens. What AI is and does is not only a matter of development and design; it also takes shape in use. AI is also how we use it. Seen from this perspective, the question is not only how governments and corporations should respect my rights, hear my voice, and let me benefit from the common good created by them, but also how I and we (say, communities) can contribute to the common good by using AI and other technologies, and by deliberating about their use. To do so would be a civic obligation and a civic virtue. Hacker communities, for instance, could play an exemplary role when it comes to using AI tools differently in social media contexts. More generally, the role of civil society will be crucial in shaping a politics of technology that does not only focus on discourse but also tinkers with the technology and applies it in creative and novel ways in the service of the common good.

This does not mean that expertise is obsolete or that everything should be left to citizens. For example, if one considers the idea of citizen participation in thinking about how to use AI-based medical diagnostic tools, such as pattern recognition in computed tomography, for the common good, then clearly input from medical and technical experts is needed. But a true democratization of AI would no longer tolerate the complete exclusion of citizens from the governance and development of these practices and technologies, as is now often the case.

Such a more active and republican role for citizens, however, should not only be supported by external expertise but also presupposes knowledge and skills on the part of citizens, and hence requires a better education for all. It requires what one can call a “civic” education: education in and for democracy, aimed at the flourishing of the polis and its citizens – indeed, aimed at the common good. This would entail the training of critical and deliberative skills. In contrast to a classical humanist education, however, this education would not only enable a more critical relation to texts but also develop a better knowledge of (other) technologies and a better awareness of their ethical and political aspects. For example, it would include fostering people’s critical skills with regard to technologies and media, including skills that help citizens to act in the new information environments created by AI, social media, and other digital technologies, and skills to use AI in an empowering way. One would also need to discuss already in schools (and not only later in parliaments and other democratic institutions and procedures) how technologies such as AI could contribute to the common good. Creating the conditions for such an education in turn requires more investment in research on these matters.

This also requires a research and education policy that finally drops the modern liberal assumption that the state and its educators can and should be entirely neutral with regard to the common good, or that educators should not speak about politics, democracy, and civic virtue. On the contrary, we need an education and an education policy that actively promote the common good and discussion about the common good – without, of course, taking an authoritarian approach to defining it. Instead, it requires a way of thinking about education that links it directly to the common good and democracy and that takes an actively pluralistic approach. Such a democratic angle and emphasis on pluralism would be in line both with Dewey’s view of democracy and education [5] and with contemporary calls for a pluralist approach in global information ethics [7]. It would implement Dewey’s vision that democratic education teaches people to creatively contribute to society, and that pluralism does not mean passive tolerance (as in classical liberalism) but instead requires engaging with and perhaps even learning from each other, including those we disagree with [6]. This emphasis on creative contributions to the common good could guide AI use in schools; children would learn to listen to and discuss different views on the common good and the role of technology vis-à-vis the common good.

If, instead, we stay with the current system, we will continue to produce a small technocratic elite that rules over a mass of angry citizens who rightly complain that they are not heard but fail to see that, and why, they too should contribute to the common good, and who refuse to listen to those they disagree with, i.e. refuse to really discuss and deliberate. Neither classical liberal rights nor Mouffe’s political struggle are enough to build a sufficient political basis for the common good – at least from a republican point of view. Instead of focusing only on the expression of voice, on rights, and on struggle (in politics, in education, and elsewhere), we need to take seriously the creative and constructive communicative dimension of citizenship and promote it vigorously in education and other contexts. In line with Dewey, we must understand the common good as the result of a creative and communicative process. For AI governance, this means that citizens are not only heard but also respected as potential active contributors to the common good: through their participation in democratic deliberation, but also through their development and use of AI in their everyday lives.

4 Conclusion: a republican view of AI for the common good

AI for democracy and AI for the common good, then, require a recognition of the intrinsic political dimension of AI [2] and of the common good: while we may agree that AI needs to promote the common good, and even agree on what the common good means in general (for example, we may adopt the Aristotelian, republican notion that the common good is about the flourishing and good life of the polis and its citizens, not just the pursuit of private interests), it is less clear what promoting and realizing it means in (AI governance) practice. Therefore, in a democracy it is crucial that we discuss this in an inclusive, participatory, and deliberative way. If we fail to do this, a democratic deficit remains, and governments and big corporations will (continue to) take the technocratic shortcut. Moreover, the making and doing of AI should already be democratized before it is governed by states or corporations and before we vote. If the republican tradition is right, then making sure that AI contributes to the common good is also a matter of doing our civic duty and exercising civic virtue in concrete AI practices: as software developers, for instance, but also as (end-)users of AI, and in education at all levels and all stages of life. If the republican approach to democracy and the common good is right, then whether and how much AI contributes to the common good depends not only on governance by “them” (nation states, international organisations, multinational corporations, unions, consumer organisations, etc.) but also on (all of) us as citizens: on our creativity as tech developers and tech users, and on our personal willingness and democratic commitment to publicly engage with others about how AI can contribute to more flourishing and more common good.