
AI for the public. How public interest theory shifts the discourse on AI


AI for social good is a thriving research topic and a frequently declared goal of AI strategies and regulation. This article investigates the requirements for AI to actually serve a public interest, and hence be socially good. The authors propose shifting the focus of the discourse towards democratic governance processes for developing and deploying AI systems. The article draws on the rich history of public interest theory in political philosophy and law, and develops a framework for ‘public interest AI’. The framework consists of (1) public justification for the AI system, (2) an emphasis on equality, (3) a deliberation/co-design process, (4) technical safeguards, and (5) openness to validation. This framework is then applied to two case studies: SyRI, the Dutch welfare fraud detection project, and UNICEF’s Project Connect, which maps schools worldwide. Through the analysis of these cases, the authors conclude that the public interest is a helpful and practical guide for the development and governance of AI for the people.


Artificial intelligence (AI) systems that aim to serve the public or the social good are an important research topic, and an explicit objective in numerous national and international AI strategies and regulatory proposals (e.g., the European Commission’s (2021) proposal). But despite the strong and widely articulated belief that AI has the potential to “address problems at the societal level and improve the well-being of the society” (Shi et al. 2020), there appear to be more examples of AI systems that harm society than of systems meeting this potential (O’Neil 2017; Pasquale 2020). The recognition of these harms has led to the rise of numerous guidelines for ethical AI. But while there is a growing consensus on general ethical AI principles (such as accuracy and robustness, explainability and transparency, bias and fairness, privacy, accountability, and safety and securityFootnote 1; see AI for People 2021), major gaps exist with regard to implementation and governance strategies (Jobin et al. 2019; Mittelstadt 2019). This may explain why, despite their value, these guidelines appear unable to safeguard the impact of AI in social contexts (and hence the calls for additional AI regulation).

In this paper, we build on these calls and argue that, aside from regulation, realizing AI that serves the common good requires a political and democratic governance process with public interest principles at its core. In other words, we propose shifting the focus from values to the design of a governance process. We believe this shift is necessary, since developing and deploying public interest AI systems raises questions of democratic legitimacy as well as of how such systems influence power relations (Kalluri 2020). Such questions go beyond the scope of the moral intentions of their makers and relate to the public at large. This approach offers the advantage of building on existing democratic theories and structures, with their rich history and institutional practices as a guide.

The public interest has been a ‘go-to’ concept in political and legal theory, as well as in democratic policy- and decision-making, for centuries (Bozeman 2007, p.86). It has proven to be both a robust and flexible concept, applicable to new cases (Feintuck 2004). It is no coincidence that a discourse around public interest technology has emerged over recent years (e.g., Meyer 2019; Schneier 2019). However, this scholarship has not expanded on the connection with the rich and insightful underlying theories of the public interest. We believe that revisiting and expanding this link is worth the effort, given the insightful answers it can produce.

In this paper, we investigate the history of ‘public interest’ in political and legal theory to identify some commonly agreed-upon principles and procedures. We then apply these principles and procedures to the development and deployment of AI systems and highlight their practical use and inspiration for the design of democratic governance processes that ensure the public interest.

The rest of this paper is organized as follows. In Sect. 2, we review the theory of public interest. In Sect. 3, we apply the general principles of this theory and develop our framework for public interest AI systems. In Sect. 4, we examine two illustrative cases through the lens of our framework. Finally, we end with a discussion and conclusion in Sect. 5.

The public interest in political theory

The term public interest, like many terms in political theory, lacks a universal definition and is contested. Its roots can be traced as far back as Plato and Aristotle. In recent decades, the notion has experienced a renaissance in political debates (Offe 2012) and an “active incarnation in public policy and management” (Bozeman 2007, p.11).

In its simplest understanding, the ‘public’ interest often appears as the counterpart of ‘individual’, ‘private’ or ‘group’ interests. It often relates to goals and virtues other than profit and market activity, such as the happiness and wellbeing of a community. Some have defined the public interest as the goal of good actions in a public (von der Pfordten 2008). An interest is more than a mere want or demand: it includes an appeal to some ‘justificatory scheme’, a claim that the interest is justifiable by (or on behalf of) individuals or the public (Held 1970, p.31). The ‘public’ refers to a specific community, the body of people concerned, even though this might be a very large group, such as the European public.

Different accounts of the public interest in political theory

The biggest controversies surrounding the public interest in political theory relate to the question of how to determine what is in the public interest. In her book “The Public Interest and Individual Interests”, feminist scholar Virginia Held (1970) investigates this relationship and, in particular, whether it can be satisfactorily explained using a simple mathematical formulation, such as preponderance or commonality, or some universally agreed unitary or normative conception. After discussing all these approaches, she proposes that the public interest needs to be resolved on a case-by-case basis, by judging and balancing the justifications given for the interests. She thus concludes that "for judgements of public interest to be valid, we must presuppose a method of deciding between […] rival claims. Hence, the meaningful use of the term 'public interest' presupposes the existence of a political system, however primitive or complex" (Held 1970, p.168).

Barry Bozeman (2007) takes a similar approach in his later book, “Public Values and Public Interest”. He depicts different accounts of the public interest as the ‘normative’, ‘consensualist’, and ‘process’ approaches (Bozeman 2007, p.89, with reference to Cochran (1974)). The normative approach sees the public interest as the crucial goal which public officials should pursue. In this approach, the public interest is the ethical standard for evaluating public policies. It assumes that there is a public interest apart from aggregated private interests, as, for instance, Kennedy (1959) argues in her work on the process of evaluation in a democratic community. The consensualist approach sees the public interest as reflected in the interests of the majority, and regards voting as a useful means to determine what is in the public interest in democracies (see Downs 1962). Lastly, Bozeman (2007, p.93) describes three varieties of the ‘process’ theories of the public interest: “an aggregative conception”, which follows the utilitarian idea that the public interest is the greatest good for the greatest number of citizens; a view of the public interest as “the competition among interests (the ‘pluralist’ conception)”; and a view of the public interest as “interest reconciliation and fair procedure”, referred to as the ‘procedural’ conception, which, for instance, Benn and Peters (1959) represent.

For his own account, Bozeman combines a proceduralist and an ideal normative approach, since he is convinced that this can provide practical utility to public policy and management. He provides a working definition for the public interest as referring to “the outcomes best serving the long-run survival and well-being of a social collective construed as a ‘public’” (Bozeman 2007, p.12). Like Held, he concludes that the public interest can never be determined universally but is situation dependent and dynamic: “What is ‘in the public interest’ changes not only case-by-case but also within the same case as time advances and conditions change” (Bozeman 2007, p.13).

The importance of deliberation and social learning in determining the public interest

Bozeman draws on the pragmatist philosopher and democratic theorist John Dewey, whom he believed to have developed an alternative and very productive account of the public interest. Bozeman argues with reference to Dewey that the pursuit of the public interest “is a matter of using open minds and sound, fair procedures to move ever closer to the ideal that is revealed during the process and, in part, by the process” (Bozeman 2007, p.101). In his work “The Public and its Problems”, Dewey (1927) outlined the democratic public as a space for democratic experimentalism (p.220, see also Sable 2012). As Bozeman (2007, p.104) elaborates, Dewey’s work adds two critical elements to public interest theory: “a method of democratic social inquiry modeled after the ideal workings of the scientific community, and a focus on the key role of deliberation, social learning, and interest transformation in this process. His philosophy offers an approach to reconciling the need to preserve public valueFootnote 2 ideals and to enable the practical application. The Dewey approach focuses on public values but not a monolithic concept of public interest; rather it focuses on a public interest in action.”

Dewey spoke of a common awareness of shared interests that communities need to establish. For Dewey, consciousness of these needs is formed and solidified through experimental social inquiry into existing public problems and conflicts, and through a process of open debate (Dewey 1935, as referenced by Bozeman 2007). His requirements for this type of debate seem as simple as they are hard to meet: individuals should enter public deliberations open-mindedly, willing to listen to each other, and considering the possibility that their own views (and potentially their preferences and interests) may be misinformed. Dewey believed that in the process of reasoned and respectful argument among fellow citizens, individual positions can be negotiated towards the shared goal of finding a public interest (Dewey 1927, p.224).

Dewey thereby recognizes conflicts between different ‘publics’, but he neither simply pursues the utilitarian goal of preference aggregation, nor stops at the pluralist assumption of inevitable conflicting group struggles. He believed that citizens are capable of harmonizing the interests of separate persons and groups in a process of deliberation by empathizing with one another, listening to others' experiences and opposing arguments, and actually learning from each other. He saw cooperation despite all differences as a priceless addition to life and believed in human capabilities to solve conflicts through dispute and collaboration. He was aware that no society has yet fully realized this potential, but he nevertheless believed in the educability of citizens and their abilities, and that the “institutionalization of the scientific spirit in education and public life would foster the kind of democratic diffusion of knowledge of social consequences that would allow citizens to chart their own political and policy course” (Bozeman 2007, p.107).

Despite all his optimism, Dewey recognized the possible fallibility of the public, which could simply be mistaken, misled by ideological persuasion or false information, kept in the dark by political secrecy, or corrupted by economic forces. But he saw a deep strength in democracies: a social intelligence that he believed would ultimately be self-correcting, as long as its open-minded and transparent character prevails. This point is crucial for our later discussion: we believe it is this transparent character, together with a participatory process of deliberation and openness to validation, that constitutes the key elements of a democratic design process for public interest AI.

The public interest in legal scholarship

Perhaps unsurprisingly, the concept of the public interest has had an important and persistent presence in legal discourse, despite the lack of a universally clear or precise definition. Feintuck (2004), who examines the history of the public interest in regulation, describes the concept as having the function of a practical link, or translation, between higher democratic and constitutional values, and the practical world of law and regulation.

Feintuck starts by following Held's conceptualization of the public interest and agrees with the notion that it has to be rooted in common values and norms: a policy “cannot be in the public interest if it conflicts with the elements of the minimal value structures that define the society” (Held 1970, p.222). Feintuck then searches for what these common values could be and argues that “the public interest should at any time serve as a counterbalance to the power of dominant interest groups in society” (p.27; emphasis ours).

Feintuck then journeys through the use of the public interest by British and American courts and policymakers over several centuries. In seventeenth-century England, the concept became associated with the protection of private property rights against potential incursions by Parliament or the Crown (Gunn 1969, as cited by Feintuck 2004). By the twentieth century, the US Federal Communications Commission used the concept in close relation to antitrust and opposition to dominant economic interests (McFadden 1997, p.464). The concept has also manifested itself, in public interest litigation cases, regarding the standing of NGOs such as Greenpeace to bring forward judicial review or environmental law cases (Feintuck 2004, p.50). Looking for the commonality among the different uses of the public interest, Feintuck places it within ‘civic republicanist’ thought, in which citizens form a democratic political community that is larger than individual interests (p.247). In such a democratic community, the expectation is that legal and regulatory instruments and practices serve the end of ‘equality of citizenship’. In other words, he identifies equality as the key normative principle behind the public interest in law and suggests it is also its core purpose in the context of regulation.Footnote 3

This is why Feintuck (2004, p.248) furthermore understands the public interest to be an ‘interpretive principle’ that “reinforces the democratic fabric of the policy”, and in this regard draws parallels between the concepts of the public interest and the ‘Rule of Law’. Along similar lines, Gordon (2013) argues that allowing NGOs to bring forward lawsuits emphasizes the public interest in vindicating the rule of law and its requirements, such as acting within powers, due process, reason-giving, proportionality and fundamental rights. Importantly though, Feintuck argues (p.251) that the concept should not be reduced to simple ‘procedural requirements’: “if any concept is to serve effectively as an interpretative principle it must overtly incorporate, and explicitly refer to, the democratic values [i.e., equality] it is intended to serve”.

But how can we interpret ‘equality of citizenship’ in a global context, given that AI systems can affect non-citizens as well? We suggest the solution is to extend the concept of equality that underlies the public interest to all humans. This extension is precisely what the Universal Declaration of Human Rights (UDHR) aspires to do. Article 1 of the Declaration states: “All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.” (United Nations 1948) All United Nations Member States have ratified at least one of the treaties inspired by the UDHR, and eighty percent have ratified four or more, giving concrete expression to its universality.Footnote 4

Criticism towards public interest and theoretical conclusion

As mentioned before, the concept of the public interest is contested. The main criticism dates back to the early twentieth century (see Bentley 1908) and centers on the concept’s ambiguity and vagueness. Additionally, a number of scholars have questioned whether the concept is helpful and applicable in political debate and practice (see Schubert 1960; Sorauf 1957; Downs 1962).

We agree with the counter-argument by Flathman (1966), and others, that difficulty of analysis and ambiguity are weak reasons to discard a concept, particularly in political or legal theory. Instead, a certain ambiguity allows different societies and generations to redefine the public interest, and therefore makes the concept even more stable. The ideal of the public interest has succeeded in capturing collective imaginaries over centuries and has therefore constantly motivated administrators and other stakeholders in society to strive for this ideal, even though it may never be fully achieved. To argue that the public interest ideal is useless if unfulfilled is to argue that any ideal in politics is useless, including justice or democracy, which are never fully measurable or achieved but, like the public interest, remain moving targets. Nevertheless, the concept of the public interest has proven to be tangible enough to motivate and anchor legal and political discourse and decision-making over several centuries.

We can summarize our theoretical outline of the concept of public interest in the political and legal theory as follows:

Firstly, arguing with Held and Bozeman, for something to serve the public interest, there must be a normative democratic justification which is accepted among citizens of a social collective construed as a public. This justification may be anchored in a collective institution (like a constitution, laws or commonly shared public opinions) and needs to point to a distinctive difference from serving private interests.

Secondly, we take from Feintuck the observation that equality can be defined as the key principle for the public interest in law. In practice, this means that anything aiming to be in the public interest must, at a minimum, not undermine equality and human rights among citizens, and at best actively serve them, as its most substantial normative goal.

Thirdly, and in reference to Bozeman and Dewey, to determine what is in the public interest for a specific public at a specific time, a social collective needs to go through a process of deliberation following the ideal of a scientific spirit. This means that citizens bring their different interests with an open mind to the realm of the public, to be discussed and possibly co-designed under the scrutiny of the collective.

AI in the public interest

With this pragmatic-idealist notion of the public interest in mind, we now want to examine the process of AI development and deployment. Our goal here is to formulate requirements which need to be met to speak of an AI system as serving the public interest. To do so systematically, we will raise and answer key questions in connection with our theoretical outline, namely: (1) a public (not profit-oriented) justification, (2) serving equality, (3) a deliberation/co-design process, (4) technical safeguards, and (5) openness to validation.

Any public interest AI system needs a public (not profit-oriented) justification

For AI to actually serve the public interest, we believe that first of all a justification to the public is necessary, to argue why the technology is not developed for the mere sake of innovation or commercial benefits but to serve a common public interest. The entity considering an AI system to be part of a solution in a social, policy or other public context needs to present an argument to fellow citizens, giving the reasons how the system will tackle and improve the given issue and why it is the best solution in consideration of the alternatives. The reasons given need to be based on the democratic arguments of the public concerned (which might be rights formulated in a constitution or other laws, or other socially agreed-upon common goals). In practice, the question of whether using AI is the best solution for a given case is often tricky, and one that cannot be fully answered in advance. Nevertheless, a preliminary discussion and a best-effort answer are necessary to ensure that the use of AI in general, and the spending of resources in particular, are justified. Many functions in society simply have no public justification to ever be automated, and many issues cannot be solved by technical means of optimization.

Another important consideration connected to this justification is that it has to be a public interest justification, which (as mentioned in the theory section) differs from private and purely economic interests. It should be noted that some scholars from within the economic discipline might contest this point and argue that profitability contributes to the public interest (e.g., see Meynhardt 2019). Most public interest scholars, especially those coming from a background of law or philosophy, have however argued quite strongly against equating the public interest with the pursuit of private economic interests (e.g., see Feintuck 2004). There are numerous well-established examples of market failures within the economic discipline, where individual commercial interests and the public interest at large diverge, such as monopoly pricing, the provision of public goods such as roads and schools, and tackling environmental pollution. As philosopher von der Pfordten (2008) states, the liberal-economic imagination, while still relevant, has been disproved from the perspective of psychology, where numerous empirical studies have shown that the ‘cold rational man’ exists in neither theory nor practice.

Following this distinction, many existing AI projects, even if they are proclaimed to serve ‘the common or social good’, fall outside the scope of the public interest if their objectives are primarily profit-oriented.Footnote 5

Public interest AI should serve equality and human rights

As we have so far argued, an AI system that is in the public interest needs to articulate a public and socially aware justification for its development and deployment. We can go one step further, taking into account the legal discourse, and state that such an AI system should serve equality and human rights (and at a minimum not undermine them).

Equality is related to the commonly discussed ethical AI principle of fairness, and the related goals of reducing bias in datasets and algorithms (e.g., AI HLEG 2019; Leslie 2019; Floridi et al. 2020; AI for People 2021). However, what we conceive goes deeper than making sure a particular sub-system does not discriminate in outcomes (between races, genders, and other societal groups). The more fundamental question should also be asked of whether an AI system should even exist in the first place, in particular when considering how it influences power relations in society. It is important for the public interest to avoid outcomes that—despite presenting a technically working solution—go against justice or shift power in an unwanted direction.

This understanding of equality, which goes deeper than bias, is important as it touches on the criticism of ‘big tech’ reducing 'Responsible AI' to only fairness and bias (Hao 2021). For instance, let us assume for the sake of this example that Facebook argues that it acts in the public interest with its mission “to give people the power to build community and bring the world closer together” (Facebook 2021). Facebook’s newsfeed algorithm surfaces misinformation and manipulative advertisements, but because these are shown to different groups in equal amounts, it is technically not biased (though the situation remains undesirable). If we instead ask whether this subsystem enhances equality in society as a whole (that is, among user groups as well as advertisers and the platform itself), then we would reach the conclusion that the newsfeed algorithm (as designed and deployed by Facebook alone) does not serve equality.Footnote 6

The inequalities caused by AI systems also affect persons with disabilities. Bennett and Keyes (2020) illustrate in their research how, for instance, the use of computer vision to diagnose autism not only raises fairness issues (due to biases in the training data of existing autism cases) but also justice concerns. “By adding technical and scientific authority to medical authority, people subject to medical contexts are not only not granted power but are even further disempowered, with even less legitimacy given to the patient’s voice” (Bennett and Keyes 2020).

If technologies like AI are to succeed in supporting equality amongst all citizens, they need to change their design approach and promote inclusive design principles (see Coleman et al. 2003; Goggin and Newell 2007), in addition to the earlier point about their use being justified in the particular context. Additionally, for equality to actually have a chance, the system needs to be open, meaning that it should be accessible to the public as a whole, without hindering barriers. Drawing inspiration from the Free and Open Source Software movements, a public interest AI system should be open source (to the extent possible), thus giving citizens the chance to validate it and repurpose it for other public interest projects.Footnote 7 Importantly, access not only promotes active participation of citizens but also serves the educational purpose of strengthening civic tech literacy.

Finally, we can also think of equality between generations since, as Offe (2012, p.678) points out, the validation of public interests is historically determined by future generations in retrospect. In the context of AI systems, this means explicitly considering the environmental harms and sustainability of such systems. As Bender et al. (2021) argue, models need to add enough public value to warrant their additional computational and environmental costs. One could similarly ask questions about the energy sources (Oberhaus 2019) used to power the cloud infrastructure running the AI systems.
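To make such sustainability questions concrete, a rough footprint estimate can be computed from hardware power draw, training time, data-center overhead, and grid carbon intensity. The sketch below is a back-of-envelope illustration only; all figures and parameter names are hypothetical assumptions, not drawn from this article or any real deployment:

```python
# Back-of-envelope estimate of the energy use and CO2 emissions of a
# model training run. All parameter values below are hypothetical.

def training_footprint(gpu_count, gpu_power_kw, hours, pue, grid_kg_co2_per_kwh):
    """Return (energy_kwh, co2_kg) for a training run.

    pue: power usage effectiveness, the data-center overhead factor (>= 1.0).
    grid_kg_co2_per_kwh: carbon intensity of the electricity grid.
    """
    energy_kwh = gpu_count * gpu_power_kw * hours * pue
    co2_kg = energy_kwh * grid_kg_co2_per_kwh
    return energy_kwh, co2_kg

# E.g., 8 GPUs drawing 0.3 kW each for 100 hours, PUE of 1.5,
# on a grid emitting 0.4 kg CO2 per kWh:
energy, co2 = training_footprint(8, 0.3, 100, 1.5, 0.4)
print(round(energy, 1), "kWh,", round(co2, 1), "kg CO2")  # roughly 360 kWh, 144 kg
```

Such an estimate is crude (it ignores CPU, networking, and embodied hardware costs), but even this level of accounting makes the Bender et al. (2021) cost-benefit question answerable in public deliberation.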

Public interest AI requires a deliberative and participatory design process

No team of developers, no matter how skillful, ethical, socially aware or diverse, can by itself determine what is in the public interest. That is not due to a lack of competence or willingness, but is simply true by definition. Nevertheless, ethical awareness as well as diverse team structures are crucial for the AI design process to be successful (Gebru 2020). In agreement with the theoretical outline presented earlier (referring to Dewey, Bozeman, Held and Feintuck), we see the process of deliberation as the only way to identify the public interest in a given case. Without public deliberation on the interests (and justifications) of different public representatives, one simply assumes the interests of others, which can lead to misunderstandings, hurtful misperceptions, and even a lack of acceptance or complete failure of a project.

The process of deliberation can take different (formal and informal) forms, depending on what suits a specific case: online documentation, city hall meetings, surveys, interviews, and bilateral conversations with diverse citizens, to name a few. Whatever the form, it should let the interests of the public be heard and discussed as the public itself sees the need for it. In addition to the typical requirements gathering and testing measures used by project teams, there should be an openness towards citizens’ questions and opinions, and, quite practically, a channel for direct contact.

To design a process of deliberation, the project initiators need to ask themselves: who is ‘the public’ with regard to their specific project? Dewey (1927, p.84) considered the public to be “those indirectly and seriously affected for good or for evil [who] form a group distinctive enough to require recognition and a name”. Regarding a specific AI case and its socio-technical application, the public concerned comprises the direct users of the system (both professional and lay users), the data subjects that were part of the training data, and more broadly all humans affected by decisions derived from the system. This last point also includes second-order effects that indirectly influence society as a whole.Footnote 8

To illustrate, let us consider the case of an AI system that analyses and predicts traffic flows and thereby assists the future planning of a city and its traffic. Designers of such a system should not only consider the public authorities and professional users (city planners, public service officials and architects) but also take into account the interests of the different groups of citizens who will be affected by the planning decisions. If the face of the city changes dramatically, certain groups of road users might be structurally disadvantaged and therefore need to have a voice in the design process.

The importance of participatory approaches is becoming more generally recognized, for instance using participation to improve the quality of datasets and avoid bias in data (Ogolla and Gupta 2018), as well as bias in algorithms (Sloane et al. 2020). Sloane et al. (2020), however, rightly caution that participatory design is not in itself a guarantee of a democratic process. The authors warn of “participation washing”, when participation is used to obscure the “extractive nature of collaboration, openness and sharing, particularly in corporate contexts”. They point to pitfalls such as anecdotal participation, which simply codes structural inequality into the results in a “top-down” manner, or, even worse, reducing participation to a performance without actually including the recommendations made by citizens. In a best-case scenario, every project aiming to serve the public interest should consider “participation as justice” (Sloane et al. 2020). This means treating the participating stakeholders as experts in their domains, promoting regular communication, building trustful relationships, and, in short, designing with instead of designing for the participants.

We agree with Sloane et al.’s (2020) observation that “experts do not often have a good understanding of how to design effective participatory processes or engage the right stakeholders to achieve the desired outcomes”. It is a real challenge to translate the outcomes of deliberation and citizen participation into the actual development process of AI technology. The existing literature on participatory design in many fields (e.g., see Arnstein 1969; Schuler and Namioka 1993; Kuhn and Winograd 1996; Simonsen and Robertson 2013; Mainsah and Morrison 2014), and particularly Christopher Alexander’s approach of using design patterns in architecture and urban planning (Alexander et al. 1977), which Gamma et al. (1994) applied to software engineering, as well as Selloni’s (2017) approach to co-designing public services, seem relevant and helpful for further developing methods for the participatory design of AI. We believe there is an urgent need for research in this area.

Public interest AI systems need to implement technical safeguards

Thus far we have laid out key principles and processes for public interest AI that adhere to democratic governance requirements. Given the technical nature of AI systems, these principles and processes need to be supplemented with technical safeguards. How to embed and protect public values in AI systems is a large area of ongoing research (Hallensleben et al. 2020; Morley et al. 2020; as well as the ACM FAccT community 2020), and much more still needs to be done. We shall here make a few preliminary suggestions on bridging technical requirements with public interest principles, and leave an in-depth discussion for future work. Three concerns we would raise relate to (i) data quality and system accuracy, (ii) data privacy, and (iii) safety and security.

  i. Data quality and system accuracy: Many data sources contain some type of bias, be it due to historical disparities, measurement errors, or other reasons (Friedman 1996; Barocas et al. 2019). These biases can lead to inaccurate predictions and decisions, which is a major issue for public interest AI systems that need to be built on a public justification and serve equality. In certain contexts, e.g. when an AI system makes predictions in a medical setting, a high level of accuracy is crucial, since false positives or false negatives might have devastating consequences (AI for People 2021).Footnote 9 A system that does not deliver the promised output, and that for instance interferes with the privacy of citizens, loses its public justification along with its function and therefore fails to serve a public interest. In fact, as we shall explain in detail in the SyRI example (in Sect. 4), the European courts have for these precise reasons disallowed the use of algorithmic systems that lacked accuracy or validity. Data sources can also have inherent limitations due to their collection context, and understanding these limitations is critically important. We see transparency about data sources, their exact use, and documentation of their shortcomings as necessary to ensure public interest outcomes (Gebru et al. 2020).

  ii. Safeguarding data privacy: There are two key connections between data privacy and public interest AI. First, as many scholars have argued, privacy is a condition for the realization of an autonomous life (e.g. Roessler 2004), which in turn is a condition for citizens to engage freely in a social inquiry to determine a collective public interest through deliberation and participation. Second, compliance with the forthcoming European AI Regulation and with existing data protection and privacy laws worldwide is a baseline for any design in the public interest; as outlined in our theory section, accordance with rights and the rule of law is critical for creating outcomes that meet the public interest.

  iii. Monitoring system safety and security: In technical terms, it is crucial that the system design is safe and robust so that the system can fulfill the purpose it is designed for (see CAHAI 2020, p.2). Malfunctions or unintended functions of the system, as well as technical weak spots that lead to security issues, endanger the benefits a system promises and thereby affect the justification of the system overall (linking back to the point on system accuracy). They can also obviously endanger public safety, which is itself a public interest. Security is a complex topic with both technical and human aspects (Anderson 2020). A good starting point for security is to monitor failures and harms in order to decide where to place efforts.
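The accuracy and equality concerns in point (i) can be made concrete with a simple disaggregated evaluation. The sketch below is purely illustrative, using synthetic labels and hypothetical group identifiers of our own invention: it computes false-positive and false-negative rates per population group, the kind of check a public interest AI system could publish as part of its technical safeguards.

```python
from collections import defaultdict

def error_rates_by_group(y_true, y_pred, groups):
    """Compute per-group false positive (FPR) and false negative (FNR) rates.

    y_true, y_pred: binary labels (1 = flagged, e.g. 'at risk')
    groups: group membership label for each record
    """
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for t, p, g in zip(y_true, y_pred, groups):
        c = counts[g]
        if t == 0:
            c["neg"] += 1
            if p == 1:
                c["fp"] += 1  # innocent record wrongly flagged
        else:
            c["pos"] += 1
            if p == 0:
                c["fn"] += 1  # true case missed
    return {
        g: {
            "fpr": c["fp"] / c["neg"] if c["neg"] else 0.0,
            "fnr": c["fn"] / c["pos"] if c["pos"] else 0.0,
        }
        for g, c in counts.items()
    }

# Synthetic example: a system that over-flags one group of records.
y_true = [0, 0, 0, 1, 0, 0, 0, 1]
y_pred = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
rates = error_rates_by_group(y_true, y_pred, groups)
# Group A is flagged incorrectly far more often than group B.
```

Publishing such disaggregated rates (rather than a single aggregate accuracy figure) is one concrete way a system's claim to serve equality can be checked by third parties.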

Public interest AI systems need to be open to validation

The deliberative and participatory design process, along with the technical safeguards in place, needs to be open to validation by others. There are two important reasons for this. The first is that, despite best intentions, AI systems that deal with the public at large may cause unintentional societal harms. Some reasons were discussed in Sect. 3.3; in addition, there is the effect of the ‘machine learning loop’ (Barocas et al. 2019): historical disparities and measurement-process errors that lead to self-fulfilling prophecies (which are nevertheless invalid outcomes). There are numerous documented cases where these problems have led to systems that inadvertently perpetuate existing stereotypes and disparities (O’Neil 2017; West et al. 2019), including, unfortunately, in the context of public administrative decision-making. Having a process and system (with sufficient documentationFootnote 10 and audits (CDEI 2020; Gebru et al. 2020)) whose outcomes can be inspected and validated by third parties is necessary to identify and resolve these problems.
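One lightweight way to make such documentation inspectable is to keep a machine-readable record alongside the system, in the spirit of datasheets for datasets (Gebru et al. 2020). The sketch below is a hypothetical minimal structure of our own design, not an established standard; the field names and the example record are illustrative assumptions. The point is that a third-party auditor could programmatically check that required documentation exists before a system goes live.

```python
# Hypothetical minimal documentation schema (illustrative, not a standard).
REQUIRED_FIELDS = [
    "purpose",            # the public justification for the system
    "data_sources",       # provenance of every dataset used
    "known_limitations",  # documented shortcomings and collection context
    "evaluation",         # accuracy metrics, ideally disaggregated by group
    "contact",            # channel to those accountable for the system
]

def missing_fields(datasheet: dict) -> list:
    """Return required documentation fields that are absent or empty."""
    return [f for f in REQUIRED_FIELDS if not datasheet.get(f)]

# Example record with one gap an auditor would flag.
datasheet = {
    "purpose": "Map school connectivity to support equal access to education.",
    "data_sources": ["satellite imagery", "ministry school registers"],
    "known_limitations": "",  # left empty: flagged below
    "evaluation": {"accuracy": 0.94},
    "contact": "project-team@example.org",
}

gaps = missing_fields(datasheet)
```

Such a check does not replace substantive audits, but it makes the minimum documentation requirement itself openly verifiable.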

The second reason relates to the fundamental democratic norm that all decisions (say, of parliament or public officials) are documented and open to inspection by citizens at a later time. The same should apply to public sector AI systems and, more generally, to any technology or AI system that claims to be in the public interest. It speaks to the idea that democratic civil societies not only have the right to understand the workings of technology (which requires transparency and explainability) but should also be able to validate that its mechanisms are democratic if they are claimed to serve the public interest.

The concept of ‘open to validation’, in our opinion, is the fundamental reason underlying pushes for transparency and explainability in AI ethics (e.g., see Larsson and Heintz 2020). Making it an explicit requirement has the benefit that transparency and explainability are not reduced to disconnected pieces or non-actionable information: they must in the end allow a holistic validation of the system’s outcomes (in comparison with the justification). Having a system that is open to validation will also lead to the often-quoted goal of ‘trustworthy AI’, but in a deeply democratic manner (that is, trust built through participation and validation, not public relations).

Finally, openness to validation also relates to the principle of accountability, understood as the clear attribution of responsibility and liability. As mentioned in Sect. 3.1, the justification given for an AI system to work in the public interest is important. This justification needs to be scrutinized and validated by others (in terms of its political impact as well as the technical realization of the system), and, as discussed in Sect. 3.3, citizens need to be able to give feedback on a system. Openness to validation thereby includes a direct channel to those accountable and capable of making changes to the system, or even of deciding to terminate its use.

Up to this point, we have outlined the theoretical basis for our understanding of the public interest (Sect. 2) and outlined a framework for public interest AI (Sect. 3). Next, we turn to cases of AI projects designed to serve the public interest.

Illustrative cases

We shall introduce two concrete cases of AI systems that have claimed to be in the public interest. One of them, the Dutch public sector SyRI project, was struck down by the courts in 2020 and is an example of a failure. The other, UNICEF’s Project Connect, appears to be a success. We investigate both cases through the lens of the public interest AI framework we have developed thus far, to exemplify its usefulness and practical applicability.

The Dutch welfare fraud detection project SyRI

The SyRI project, short for “Systeem Risico Indicatie” (“System Risk Indication”), was deployed by the Dutch Ministry of Social Affairs between 2014 and 2019. The Ministry deployed SyRI on behalf of other administrative bodies to detect social benefit, welfare, and tax fraud. The system combined a multitude of data sources and targeted mainly poor neighborhoods in a number of Dutch cities (Bekker 2021). In February 2020, The Hague Court ruled that SyRI had to be stopped immediately because it violated Article 8 of the European Convention on Human Rights (ECHR), which protects the right to private and family life.Footnote 11 (The plaintiffs were a number of NGOs.) The system raised “serious concerns” for the UN Special Rapporteur on extreme poverty and human rights, Professor Alston, while not actually proving helpful for the fraud detection it was aimed at (Alston 2019).

The interesting point about the SyRI decision is that neither the judges nor the plaintiffs or the government questioned the public interest in detecting fraud and protecting public funds.Footnote 12 The key problem from the court’s perspective was that, in addition to the inappropriate use of citizen data, the developed system did not work. The latter, in our view, is in part due to the opacity in which SyRI was developed.

The most pressing problems with SyRI from a public interest AI perspective are as follows:

  • SyRI did not actually succeed in detecting fraud. According to the court ruling, it thus infringed on privacy rights for no public benefit. This calls into question the justification for SyRI’s existence.

  • SyRI did not serve but rather undermined equality by specifically targeting poor neighborhoods (which the courts found discriminatory). This was, of course, a decision made by the public institutions and not inherent in the technical design, which underlines the point that AI is a socio-technical system.

  • The system, though briefly discussed and passed in both houses of parliament, was not openly developed, nor were any deliberative processes involving citizens or civil society organizations realized. This was because the Dutch Government argued that details of the system could be used by fraudsters to game it, an argument the courts disagreed with.Footnote 13 Ideally, the Ministry could have included researchers and representatives of civil society in the design process (in addition to the administrative bodies and tech consultants), which might have led to the system being scrapped earlier or designed very differently. Such a process would have required a much more transparent communication strategy from the Ministry of Social Affairs.

  • While not many technical details were disclosed by the Ministry (despite requests by plaintiffs and opposition parties), the Dutch Data Protection Authority at one point raised serious concerns with regard to the privacy safeguards (Bekker 2021).

  • There was clearly little openness for validation, for the reasons already articulated. Within a democracy, validation is necessary to ensure trust, which could have meant a review process by researchers or representatives of the public, especially including critical voices that might have scrutinized civil rights issues early on.

We believe that if the Ministry had followed a public interest AI approach, with a focus on equality at its heart, a deliberative and participatory design process, sufficient technical safeguards, and openness to validation, then the resulting system would not have suffered the fate of SyRI. At this stage, however, this remains a hypothesis that we aim to test and challenge in further research.

UNICEF’s Project Connect

Project ConnectFootnote 14 is a project aiming to map the internet connectivity of schools worldwide (Yi et al. 2019). Its stated purpose (hence, justification) is “to provide quality education and promote lifelong learning, listed as UN sustainable development goal 4 (SDG4), to ensure equal access to opportunity (SDG10) and eventually, to reduce poverty (SDG1) by retrieving accurate data about school locations and their connectivity status to the internet in countries, where ‘educational facilities’ records are often inaccurate, incomplete or non-existent” (Yi et al. 2019).

The project uses computer vision (a convolutional neural network pre-trained on ImageNet) to analyze satellite images for “school-shaped” buildings in order to “map new schools, validate the accuracy of existing school location data, and automatically update maps when school locations change in the future.” The project collaborates with government agencies (such as Ministries of Education and Ministries of Information and Communications Technology of various countries) as well as with the private sector (mobile network operators, Internet service providers, and other tech companies). The output is an open-source dataset of schools and their telecommunication infrastructure. To determine the degree of internet connectivity, real-time internet measurement tools are run periodically. So far, the project has succeeded in adding accurate locations of 23,100 schools in Kenya, Rwanda, Sierra Leone, Niger, Honduras, Ghana, Kazakhstan, and Uzbekistan. The project, its goals, achievements, partners, methods, and funding are documented extensively on a website, which also visualizes the mapping progress globally.
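The published description suggests a pipeline in which a model scores satellite tiles and confirmed detections feed an open dataset, with human mappers validating the rest. The sketch below is our own hypothetical reconstruction of that flow, not the project's actual code: the per-tile scores, coordinates, and the confidence threshold are all invented for illustration, and a stand-in score replaces the real convolutional model.

```python
from dataclasses import dataclass

@dataclass
class Tile:
    lat: float
    lon: float
    score: float  # stand-in for the model's 'school-likelihood' output

def candidate_schools(tiles, threshold=0.9):
    """Split tiles into confident detections and cases for human review.

    In the real project, expert OpenStreetMap mappers validate the results;
    here we only illustrate routing by a (hypothetical) confidence threshold.
    """
    confirmed, for_review = [], []
    for t in tiles:
        (confirmed if t.score >= threshold else for_review).append(t)
    return confirmed, for_review

# Invented example tiles with stand-in scores.
tiles = [
    Tile(-1.29, 36.82, 0.97),   # high confidence: goes to the open dataset
    Tile(-1.95, 30.06, 0.55),   # uncertain: routed to expert validation
    Tile(8.48, -13.23, 0.93),
]
confirmed, for_review = candidate_schools(tiles)
```

The design choice worth noting is that low-confidence predictions are not discarded or silently published but routed to human validation, which is what keeps the resulting open dataset trustworthy.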

As far as we can tell from this documentation and exchange with the project team, we see this project as a positive example of implementing public interest AI principles (in line with our framework), for the following reasons:

  • First of all, it seems quite clear that the project's aims serve equality, by providing data to stakeholders who can then increase access to the internet and education. The use of AI seems reasonable, proportionate, and thereby justified in this case, since other methods of mapping or data retrieval would require more resources than computer vision, given the massive amount of land involved.

  • The project allows a process of deliberation by making detailed information publicly available and offering direct contacts. In some sense, it has realized a co-design approach: the many stakeholders and their expertise are combined for a shared goal. Nevertheless, it is not easy to evaluate if and how affected citizens may engage meaningfully with the project.

  • The project implements a data sharing and privacy framework with specific Child Data Protection Policies.Footnote 15 To validate its own mapping results, the project has in-house expert mappers who have experience in mapping and validating map objects in OpenStreetMap (OSM) communities.

The project provides a high degree of transparency and is open for validation by others, using an open-source tool,Footnote 16 and providing the results of the project in an open dataset as well as in a peer-reviewed paper (see Yi et al. 2019). The project team responded immediately to the authors’ request to answer questions about the validation process of the data and the achieved impact.

As a general conclusion, we believe that the framework of public interest proposed here proves helpful for discussing how an AI system was designed and implemented and whether it meets the goal of serving the public. In general, we noticed that in many cases the information given for projects that supposedly serve the social good is rather slim, and important factors are often opaque. The questions we asked the initiators in these cases (or researched in the given documentation) are necessary to conclude whether AI is used to serve a public interest, for instance: Which stakeholders could participate in the design process? What impact was achieved? And how was the process open for validation?


Having introduced our public interest AI framework and examined it via two public examples, we conclude the paper by looking at the broader implications vis-à-vis the ethical AI principles, and discuss some remaining challenges and open questions.

The relation between public interest AI and the broader AI ethics debate

In the last decade, much has been published on the question of how to design AI to serve certain ethical principles (to name just a few: AI HLEG 2019; Floridi et al. 2020; Leslie 2019; ‘AI for People’). At the center of these approaches is the engineer, who bears the responsibility to follow ethical values and ensure their embedding in the technology (Simon et al. 2020; Umbrello and van de Poel 2021). Generally, there is a strong focus on the values of fairness, accountability, and transparency to guard against biases (Eiband et al. 2018; or works presented at the ACM FAccT conference 2020). Another widely discussed topic is the design of explainability (e.g., Arya et al. 2019; Miller 2019; Wolf 2019; Liao et al. 2020). While acknowledging that the values embedded in AI have a crucial impact on how systems affect society (see van de Poel 2020), we nevertheless believe it is necessary to shift the focus from predefined values to the procedure of AI development and deployment, as we have laid out in this paper.

Generally, we agree with most of the values and principles that most ethical guidelines for AI argue for (for an overview see Floridi and Cowls 2019; Jobin et al. 2019). Nevertheless, like other authors, we doubt that ethical guidelines based on principles alone can develop into a binding framework for trustworthy and ethical AI in which compliance and impact can be monitored and validated (see Mittelstadt 2019, p.505). We believe that an approach focused on the public interest, with its strong connection to the rule of law and its orientation toward process and governance, is more promising for bridging the gap between values, principles, and concrete AI implementation that leads to democratic outcomes.

One question that is often underrepresented in AI ethics guidelines is whether AI should be used at all. Powell (2021) and Gürses et al. (2020) highlight the societal consequences of the paradigm of optimization (in which AI plays a driving role). Powell (2021) argues for a right to minimal viable datafication, which means “seeking to employ decision-making strategies that may appear to be more costly on the surface but that leave space for different kinds of knowledge, as well as for data to decay over time, for frictions to be identified and addressed, and for different forms of democratic participation and accountability, including but not limited to data audit, sensing citizenship, and autonomous networking” (p.177). In agreement with these authors, the public interest principles we identified ask for a public justification regarding whether AI should be used in a case, and include an imperative to serve equality and human rights.

Although we advocate for a deliberative approach, we highly appreciate the internationally coordinated attempts to set boundaries and a clear legal framework for AI, for instance with the European Commission’s (2021) proposed AI Regulation or the Council of Europe’s CAHAI (2020) reports. The rule of law, after all, is itself in the public interest. But the use of AI in administrative practice shows that laws alone are not enough to ensure effective and democratically accepted outcomes. To achieve this, it is essential to understand the meaning of the public interest concept and to bring it to the forefront of AI projects aiming to serve the public.

It is important to highlight again the difference in scope between projects that fall under the public interest and the broader ethical AI discourse. In short, AI projects that primarily serve profit maximization do not fall under the public interest, even when they are (hopefully) non-maleficent in nature, have positive effects on society, and follow ethical AI guidelines. This is because, as we argued, public interest projects need to serve equality, which often counters private, profit-oriented interests. Additionally, profit-driven objectives are often counterproductive to a truly participatory design approach. As Sloane et al. (2020) point out, “[in a corporate setting] justice can almost be seen as an oxymoron: given the extractive and oppressive capitalist logics and contexts of ML systems, it appears impossible to design ML products that are genuinely ‘just’ and ‘equitable’.” Specifically in those cases where AI is designed not to serve the public interest but with profit-oriented interests at heart, general ethical guidelines are a necessary addition to upcoming regulations. In agreement with other scholars (Jobin et al. 2019, p.96), we believe that in such cases AI ethics should be further harmonized in a collaborative effort amongst stakeholders to gain a binding character, and also be embedded in a broader framework of ethical action by organizations (Lauer 2021).

The hype around artificial intelligence for social good is still ongoing and requires further debunking. In many discussions, the conclusion that a project is “for good” is reached too quickly, without proper consideration of important details and without the help of any established theoretical analysis. Even though the ‘good’ or the ‘public interest’ cannot be defined universally, democracies have established political agreements and institutions to define exactly this. As we have hopefully exemplified with this article, there are existing concepts, theories, discourses, and deliberative procedures available to guide us to pragmatic conclusions.

Open questions for further research

The concept of Public Interest AI raises interesting and new questions that require further research.

First of all, we think more work needs to be done to determine what degree and what type of deliberation and co-design are necessary for AI projects to deliver on the promise of serving the public interest. Similar to the position articulated by Sloane et al. (2020), we believe that more attention needs to be brought to successful and appropriate methods of participatory design overall; the step of implementing the results is particularly hard. As many attempts have shown, “design by committee” does not necessarily go well with creation. How to bridge this gap is, we propose, an important question for further research. On a more detailed level, we are interested to learn more about tools and methods for translating between participatory design and technical implementation.

Another related and important question regards a better understanding of the gap between the vision and the reality of open-source software for public interest (and in particular public sector) AI. While we do hear voices in general agreement that open source should be the goal for public service infrastructures, reality seems to impose (to the best of our knowledge under-researched) obstacles to actual adoption. It thus remains an open question in which scenarios, under which licenses, and to what degree a commitment to open and free software is necessary for public interest AI.

Finally, as a basis for more extensive research, we are releasing a survey and creating a dataset of public interest AI cases. We aim to identify cases in broad areas, including public administration, and test the (so far theoretical) guiding principles we have presented in this paper.


The hype around the potential of AI has inspired many AI for social good (AI4SG) projects, which potentially aim to fulfill a purpose that serves a public interest.Footnote 17 We believe that, despite the current situation in which we see more shortcomings than successful cases, the use of AI for a public interest is possible and necessary. We have argued that in the current academic and public debate the standards for making this assessment should be much higher and based on democratic considerations. The question of if and how AI serves a public interest is too essential and relevant for the future of our societies to answer on the basis of superficial insights or gut feeling, or to leave to any one group within society to decide for all.

This article presents an approach which brings public interest theories and legal perspectives to the forefront of the argument, allowing a deeper analysis of relevant cases, and sketching an approach for public interest AI that focuses on democratic governance and a process of deliberation, validation and public justification. We hope and believe that the turn to the well-established concept of the public interest, and its rich underlying history in theory and practice, can bring great clarity to the debate about AI for the people.



  2. Bozeman introduces the important difference between the public interest and public values. According to him, “a society’s public values are those providing normative consensus about (a) the rights, benefits, and prerogatives to which citizens should (and should not) be entitled; (b) the obligations of citizens to a society, the state and one another; and (c) the principles on which governments and policies should be based” (Bozeman 2007, p.13). It is important to note that these are not the same as accumulated individual values about things public, which could be gathered in opinion polls. In contrast, public values are held in common but are not necessarily embraced by all members of a public (Bozeman 2007, p.13). They might not follow the moral judgement of all and might change over time, as many historical examples of shifting values demonstrate (thinking of women’s rights, for instance). They are an empirical manifestation of a society's values at a specific time. Public values can be reflected in many places in a society, but fundamental law, a given constitution, or rulings of the high courts are the most usual suspects for indicating which values a public has set for itself at a specific time.

  3. Bernisson, who has researched the traditional Western European understanding of the public interest, similarly concludes that the assurance of equality, meaning equal rights granted to citizens, is the “crucial concept to implement” (Bernisson 2021, p.27).


  5. Nevertheless, the question remains whether the public sector and public funding are the only imaginable options for the development of public interest AI, especially considering that long-term maintenance might be necessary and costly over time. We believe that models like the commons (Helfrich et al. 2015; Stalder 2017; Dulong de Rosnay and Stalder 2020), or companies that act under non-profit status, have introduced resilient economic alternatives that demonstrate how products or resources can be economically sustainable while not serving private or commercial interests. The key difference for AI projects in the public interest should be that their funding model protects against conflicts between commercial or individual profit and the public interest.

  6. We will also work through an example of this principle in reasoning through public sector AI systems in Sect. 4.

  7. Existing models of commons in the realm of IT services are interesting to explore if one wants to ensure a collectively sustainable AI infrastructure. To what extent this can be realized, and what other measures can help to create public interest AI as a sustainable public good, are important questions that would benefit from empirical research.

  8. In fact the current draft AI Regulation of the European Union explicitly mandates thinking about such second order effects when doing a risk-assessment for the AI system (European Commission 2021).


  10. One ideal form of being open to validation is to run open-source and open-data projects, but this is not always possible, for instance due to intellectual property and privacy reasons. In our opinion, such a deviation should then be justified and accompanied by alternative possibilities for validation.

  11. District Court The Hague ECLI:NL:RBDHA:2020:865, Sect. 6.7.

  12. District Court The Hague ECLI:NL:RBDHA:2020:865, Sect. 6.4.

  13. See District Court The Hague ECLI:NL:RBDHA:2020:865, Sect. 6.49; also see Braun 2018.

  14. See the Project Connect website, which is part of the UNICEF GIGA Initiative.



  17. Sadly, many such projects never leave the lab to see real-world application (Shi et al. 2020).


  • ACM FAccT (2021) ACM Conference on Fairness, Accountability, and Transparency. In: ACM FAccT. Accessed 23 Apr 2021

  • AI for People (2021) Accuracy & robustness. In: AI for People. Accessed 28 Apr 2021

  • Alexander C, Ishikawa S, Silverstein M et al (1977) A Pattern Language: Towns, Buildings, Construction. Oxford University Press, New York

  • Alston P (2019) Brief by the United Nations Special Rapporteur on extreme poverty and human rights as Amicus Curiae in the case of NJCM c.s. The Hague

  • Anderson R (2020) Security Engineering: A Guide to Building Dependable Distributed Systems, 3rd edn. Wiley, Indianapolis

  • Arnstein SR (1969) A ladder of citizen participation. J Am Inst Plann 35:216–224.

  • Arya V, Bellamy RKE, Chen P-Y, et al (2019) One Explanation Does Not Fit All: A Toolkit and Taxonomy of AI Explainability Techniques. arXiv:1909.03012 [cs.AI]

  • Barocas S, Hardt M, Narayanan A (2019) Fairness and Machine Learning.

  • Bekker S (2021) Fundamental Rights in Digital Welfare States: The Case of SyRI in the Netherlands. In: Spijkers O, Werner WG, Wessel RA (eds) Netherlands Yearbook of International Law 2019: Yearbooks in International Law: History, Function and Future. T.M.C. Asser Press, The Hague, pp 289–307

  • Bender EM, Gebru T, McMillan-Major A, Shmitchell S (2021) On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? In: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery, New York, NY, USA, pp 610–623

  • Benn SI, Peters RS (1959) Social principles and the democratic state. Allen & Unwin, London

  • Bennett CL, Keyes O (2020) What is the point of fairness? Disability, AI and the Complexity of Justice. SIGACCESS Access Comput.

  • Bentley AF (1908) The process of government: a study of social pressures. University of Chicago Press, Chicago

  • Bozeman B (2007) Public values and public interest counterbalancing economic individualism. Georgetown University Press, Washington, D.C.

  • Braun I (2018) Risikobürger. In: AlgorithmWatch. Accessed 30 Apr 2021

  • CAHAI (2020) Ad hoc Committee on Artificial Intelligence: Feasibility study on a legal framework on AI design, development and application based on Council of Europe’s standards, adopted by the CAHAI on 17 December 2020

  • CDEI (2020) Review into bias in algorithmic decision-making. Centre for Data Ethics and Innovation

  • Cochran CE (1974) Political Science and “The Public Interest.” J Politics 36:327–355.

  • Coleman R, Keates S, Lebbon C, Clarkson PJ (eds) (2003) Inclusive design: design for the whole population. Springer Science & Business Media, London

  • European Commission (2021) Proposal for a Regulation on a European approach for Artificial Intelligence

  • Dewey J (1927) The public and its problems: an essay in political inquiry, reissue edition (2016). Swallow Press, Athens, Ohio

  • Downs A (1962) The public interest: its meaning in a democracy. Soc Res Int Q 29:1–36

  • Dulong de Rosnay M, Stalder F (2020) Digital Commons. Internet Policy Review 9:4.

  • Eiband M, Schneider H, Bilandzic M et al (2018) Bringing Transparency Design into Practice. 23rd International Conference on Intelligent User Interfaces. ACM, Tokyo Japan, pp 211–223

  • Facebook (2021) FAQ. In: Facebook. Accessed 26 Apr 2021

  • Feintuck M (2004) ‘The Public Interest’ in Regulation. Oxford University Press, Oxford

  • Flathman RE (1966) The Public Interest, An Essay Concerning the Normative Discourse of Politics. John Wiley & Sons, Inc

  • Floridi L, Cowls J (2019) A Unified Framework of Five Principles for AI in Society. Harvard Data Sci Rev 1:1.

  • Floridi L, Cowls J, King TC, Taddeo M (2020) How to design AI for social good: seven essential factors. Sci Eng Ethics 26:1771–1796.

  • Friedman B (1996) Value-Sensitive Design. In: Value-Sensitive Design. Colby College and The Mina Institute

  • Gamma E, Helm R, Johnson R, et al (1994) Design Patterns: Elements of Reusable Object-Oriented Software, 1st edn. Addison-Wesley Professional

  • Gebru T (2020) Race and Gender. In: Dubber MD, Pasquale F, Das S (eds) The Oxford Handbook of Ethics of AI. Oxford University Press, New York, pp 253–272

  • Gebru T, Morgenstern J, Vecchione B, et al (2020) Datasheets for Datasets. arXiv:1803.09010 [cs]

  • Goggin G, Newell C (2007) The business of digital disability. Inf Soc 23:159–168.

  • Gordon A (2013) Public interest and the three dimensions of judicial review. Northern Ireland Legal Quarterly 64:125–142

  • Gunn JAW, Kenyon JP (1969) Politics and the Public Interest in the Seventeenth Century. Am Hist Rev 75:488.

  • Gürses S, Overdorf R, Balsa E (2020) POTs: Protective Optimization Technologies. In: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency 177–188.

  • District Court of The Hague (2020) ECLI:NL:RBDHA:2020:1878, Rechtbank Den Haag, C-09–550982-HA ZA 18–388 (English translation). In: Uitspraken.rechtspraak. Accessed 28 Apr 2021

  • Hallensleben S, Hustedt C, Fetic L (2020) From Principles to Practice An interdisciplinary framework to operationalise AI ethics. Bertelsmannstiftung e.V, Berlin

  • Hao K (2021) How Facebook got addicted to spreading misinformation. In: MIT Technology Review. Accessed 19 Apr 2021

  • Held V (1970) The Public Interest and Individual Interests. Basic Books, New York

  • Helfrich S, Bollier D, Heinrich Böll Stiftung (eds) (2015) Die Welt der Commons: Muster gemeinsamen Handelns, 1st edn. transcript, Bielefeld

  • High-Level Expert Group on Artificial Intelligence (2019) Ethics guidelines for trustworthy AI

  • Jobin A, Ienca M, Vayena E (2019) The global landscape of AI ethics guidelines. Nat Mach Intell 1:389–399.

  • Kalluri P (2020) Don’t ask if artificial intelligence is good or fair, ask how it shifts power. Nature 583:169.

  • Kennedy G (1959) The process of evaluation in a democratic community. J Philos 56:253–263.

  • Keyes O (2020) Automating autism: Disability, discourse, and Artificial Intelligence. J Sociotech Crit 1:1–31.

  • Kuhn S, Winograd T (1996) Participatory Design. In: Winograd T, Bennett J, Young LD, Hartfield B (eds) Bringing Design to Software. Addison-Wesley, New York

  • Larsson S, Heintz F (2020) Transparency in artificial intelligence. Internet Policy Rev 9:2.

  • Lauer D (2021) You cannot have AI ethics without ethics. AI Ethics 1:21–25.

  • Leslie D (2019) Understanding Artificial Intelligence Ethics and Safety: A Guide for the Responsible Design and Implementation of AI Systems in the Public Sector. Social Science Research Network, Rochester, NY

  • Liao QV, Gruen D, Miller S (2020) Questioning the AI: Informing Design Practices for Explainable AI User Experiences. In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. pp 1–15

  • Mainsah H, Morrison A (2014) Participatory design through a cultural lens: insights from postcolonial theory. In: Proceedings of the 13th Participatory Design Conference on Short Papers, Industry Cases, Workshop Descriptions, Doctoral Consortium papers, and Keynote abstracts - PDC ’14 - volume 2. ACM Press, Windhoek, Namibia, pp 83–86

  • McFadden DB (1997) Antitrust and Communications: Changes After the Telecommunications Act of 1996. Federal Communications Law Journal 49:17

  • Meyer K (2019) Von Moonshots und Prototypen, oder “Public Interest Tech”– What goes up must trickle down. In: Medium. Accessed 21 Jan 2021

  • Meynhardt T (2019) Value creation in the eyes of society. Public Value Deepening, Enriching, and Broadening the Theory and Practice. Routledge, New York, pp 5–23

  • Miller T (2019) Explanation in artificial intelligence: Insights from the social sciences. Artif Intell 267:1–38.

  • Mittelstadt B (2019) Principles alone cannot guarantee ethical AI. Nat Mach Intell 1:501–507.

  • Morley J, Floridi L, Kinsey L, Elhalal A (2020) From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci Eng Ethics 26:2141–2168.

  • O’Neil C (2017) Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, 1st edn. Penguin, London

  • Oberhaus D (2019) Amazon, Google, Microsoft: Here’s Who Has the Greenest Cloud. In: Wired. Accessed 19 Apr 2021

  • Offe C (2012) Whose Good is the Common Good? Philos Soc Crit 38:665–684.

  • Ogolla S, Gupta A (2018) Inclusive Design: methods to ensure a high degree of participation in Artificial Intelligence (AI) systems. In: University of Oxford Connected Life 2018 – Conference Proceedings. Oxford, p 12

  • Pasquale F (2020) New Laws of Robotics: Defending Human Expertise in the Age of AI. Harvard University Press, Cambridge

  • Powell AB (2021) Undoing optimization: civic action in smart cities. Yale University Press, New Haven

  • Roessler B (2004) The Value of Privacy, 1st edn. Polity, Cambridge

  • Sable C (2012) Dewey, democracy, and democratic experimentalism. Contemp Pragmatism 9:35–55.

  • Schneier B (2019) Cybersecurity for the Public Interest. IEEE Security Privacy 17:84–83.

  • Schubert G (1960) The Public Interest: A Critique of the Theory of a Political Concept. The Free Press, Glencoe

  • Schuler D, Namioka A (eds) (1993) Participatory design: principles and practices. L. Erlbaum Associates, Hillsdale, N.J

  • Selloni D (2017) CoDesign for Public-Interest Services. Springer International Publishing, Cham

  • Shi ZR, Wang C, Fang F (2020) Artificial Intelligence for Social Good: A Survey. arXiv preprint arXiv:2001.01818 [cs]

  • Simon J, Wong P-H, Rieder G (2020) Algorithmic bias and the value sensitive design approach. Internet Policy Rev 9:4.

  • Simonsen J, Robertson T (eds) (2013) Routledge International Handbook of Participatory Design. Routledge, New York

  • Sloane M, Moss E, Awomolo O, Forlano L (2020) Participation is not a Design Fix for Machine Learning. arXiv:2007.02423 [cs]

  • Sorauf FJ (1957) The public interest reconsidered. J Polit 19:616–639.

  • Stalder F (2017) The Digital Condition, 1st edn. Polity, Cambridge

  • Umbrello S, van de Poel I (2021) Mapping value sensitive design onto AI for social good principles. AI Ethics.

  • United Nations (1948) Universal declaration of human rights

  • van de Poel I (2020) Embedding values in artificial intelligence (AI) systems. Mind Mach 30:385–409.

  • von der Pfordten D (2008) Zum Begriff des Gemeinwohls. In: von Alemann U, Merten H, Morlok M (eds) Gemeinwohl und politische Parteien, 1st edn. Nomos Verlag, pp 22–37

  • West SM, Whittaker M, Crawford K (2019) Discriminating Systems. Gender, Race and Power in AI. AI Now Institute 33

  • Wolf CT (2019) Explainability scenarios: towards scenario-based XAI design. In: Proceedings of the 24th International Conference on Intelligent User Interfaces. ACM, Marina del Ray California, pp 252–257

  • Yi Z, Zurutuza N, Bollinger D, García-Herranz M, Kim D (2019) Towards equitable access to information and opportunity for all: mapping schools with high-resolution Satellite Imagery and Machine Learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition (CVPR).

Corresponding author

Correspondence to Theresa Züger.


Rights and permissions

Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License (CC BY 4.0), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, provided appropriate credit is given to the original author(s) and the source.

Cite this article

Züger, T., Asghari, H. AI for the public. How public interest theory shifts the discourse on AI. AI & Soc (2022).


Keywords

  • Artificial intelligence
  • Public interest
  • Deliberation
  • Democratic governance
  • AI ethics