With this pragmatic-idealist notion of the public interest in mind, we now want to examine the process of AI development and deployment. Our goal here is to formulate the requirements that must be met before an AI system can be said to serve the public interest. To do so systematically, we raise and answer key questions in connection with our theoretical outline, namely: (1) a public (not profit-oriented) justification, (2) serving equality and human rights, (3) a deliberative and participatory design process, (4) adherence to key technical safeguards, and (5) openness to validation.
Any public interest AI system needs a public (not profit-oriented) justification
For AI to actually serve the public interest, we believe that, first of all, a justification to the public is necessary: an argument for why the technology is developed not for the mere sake of innovation or commercial benefit but to serve a common public interest. The entity considering an AI system to be part of a solution in a social, policy or other public context needs to present an argument to fellow citizens, giving reasons for how the system will tackle and improve the given issue and why it is the best solution in view of the alternatives. The reasons given need to be grounded in the democratic arguments of the public concerned (which might be rights formulated in a constitution or other laws, or other socially agreed common goals). In practice, the question of whether AI is the best solution for a given case is often difficult, and one that cannot be fully answered in advance. Nevertheless, a preliminary discussion and best-effort answer are necessary to ensure that the use of AI in general, and the spending of resources in particular, are justified. Many functions in society simply have no public justification to ever be automated, and many issues cannot be solved by technical means of optimization.
Another important consideration connected to this justification is that it has to be a public interest justification, which (as mentioned in the theory section) differs from private and purely economic interests. It should be noted that some scholars from within the economic discipline might contest this point and argue that profitability contributes to the public interest (e.g., see Meynhardt 2019). Most public interest scholars, however, especially those with a background in law or philosophy, have argued quite strongly against equating the public interest with the pursuit of private economic interests (e.g., see Feintuck 2004). There are numerous well-established examples of market failures within the economic discipline, where individual commercial interests and the public interest at large diverge, such as monopoly pricing, the provision of public goods such as roads and schools, and tackling environmental pollution. As philosopher von der Pfordten (2008) states, the liberal-economic imagination, while still relevant, has been contradicted from the perspective of psychology, where numerous empirical studies have shown that the ‘cold rational man’ exists neither in theory nor in practice.
Following this distinction, this implies for public interest AI that many existing AI projects, even if they are proclaimed to serve ‘the common or social good’, are out of scope for serving the public interest if their objectives are primarily profit-oriented.
Public interest AI should serve equality and human rights
As we have argued so far, an AI system that is in the public interest needs to articulate a public and socially aware justification for its development and deployment. We can go one step further, taking into account the legal discourse, and state that such an AI system should serve equality and human rights (and at a minimum not undermine them).
Equality is related to the commonly discussed ethical AI principle of fairness, and the related goals of reducing bias in datasets and algorithms (e.g., AI HLEG 2019; Leslie 2019; Floridi et al. 2020; AI for People 2021). However, what we have in mind goes deeper than making sure a particular sub-system does not discriminate in outcomes (between races, genders, and other societal groups). The more fundamental question of whether an AI system should even exist in the first place should also be asked, in particular when considering how it influences power relations in society. It is important for the public interest to avoid outcomes that, despite presenting a technically working solution, go against justice or shift power in an unwanted direction.
This understanding of equality as deeper than bias is important because it touches on the criticism of ‘big tech’ reducing 'Responsible AI' to only fairness and bias (Hao 2021). For instance, let us assume for the sake of this example that Facebook would argue that it acts in the public interest with its mission “to give people the power to build community and bring the world closer together” (Facebook 2021). Suppose Facebook’s newsfeed algorithm surfaces misinformation and manipulative advertisements, but shows them to different groups in equal amounts; the algorithm is then technically not biased, while the situation remains undesirable. If we instead ask whether this subsystem enhances equality in society as a whole (that is, among user groups, advertisers, and the platform itself), then we would reach the conclusion that the newsfeed algorithm (as designed and deployed by Facebook alone) does not serve equality.
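To make this distinction concrete, here is a minimal sketch (with entirely hypothetical numbers and group names) of how a standard demographic-parity check can pass while the deeper equality question remains unanswered:

```python
# A minimal sketch (hypothetical data and names) of why group parity alone
# says nothing about equality: if every group sees misinformation at the
# same rate, a demographic-parity check passes even though the outcome
# harms all users relative to the platform.

exposure_to_misinfo = {          # share of feed items flagged as misinformation,
    "group_a": 0.12,             # per (hypothetical) user group
    "group_b": 0.12,
    "group_c": 0.12,
}

def parity_gap(rates: dict[str, float]) -> float:
    """Largest between-group difference; 0.0 means 'technically unbiased'."""
    values = rates.values()
    return max(values) - min(values)

print(parity_gap(exposure_to_misinfo))  # 0.0 -> passes a bias audit
# Yet 12% misinformation exposure for everyone still fails the deeper
# equality question: who gains power from this design, and who loses it?
```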
The inequalities caused by AI systems also apply to persons with disabilities. Bennett and Keyes (2020) illustrate in their research how, for instance, the use of computer vision to diagnose autism not only raises fairness issues (due to biases in the training data of existing autism cases) but also justice concerns: “By adding technical and scientific authority to medical authority, people subject to medical contexts are not only not granted power but are even further disempowered, with even less legitimacy given to the patient’s voice” (Bennett and Keyes 2020).
If technologies like AI are to succeed in supporting equality amongst all citizens, their designers need to change their design approach and promote inclusive design principles (see Coleman et al. 2003; Goggin and Newell 2007), in addition to the earlier point about their use being justified in the particular context. Additionally, for equality to actually have a chance, the system needs to be open, meaning that it should be available to the public as a whole (and without hindering barriers). Drawing inspiration from the Free and Open Source Software movements, a public interest AI system should be open source (to the extent possible), thus giving citizens the chance to validate it and repurpose it for other public interest projects. Importantly, such access not only promotes active participation of citizens but also serves the educational purpose of strengthening civic tech literacy.
Finally, we can also think of equality between generations, since, as Offe (2012, p.678) points out, the validation of public interests is historically determined by future generations in retrospect. In the context of AI systems, this means explicitly considering the environmental harms and sustainability of such systems. As Bender et al. (2021) argue, models need to add enough public value to warrant their additional computational and environmental costs. One could similarly ask questions about the energy sources (Oberhaus 2019) used to power the cloud running the AI systems.
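To illustrate the kind of accounting this implies, consider the following back-of-the-envelope sketch; every figure in it is an illustrative assumption rather than a measurement:

```python
# A back-of-the-envelope sketch (all figures are illustrative assumptions,
# not measurements) of the kind of environmental accounting Bender et al.
# (2021) call for: training cost in kWh and kg CO2, to be weighed against
# the public value a model adds.

gpu_power_kw = 0.3          # assumed draw per accelerator, in kW
num_gpus = 64               # assumed cluster size
training_hours = 24 * 14    # assumed two-week training run
pue = 1.5                   # assumed data-centre power usage effectiveness
grid_kg_co2_per_kwh = 0.4   # assumed carbon intensity of the local grid

energy_kwh = gpu_power_kw * num_gpus * training_hours * pue
co2_kg = energy_kwh * grid_kg_co2_per_kwh

print(f"~{energy_kwh:,.0f} kWh, ~{co2_kg:,.0f} kg CO2")
# The public-interest question is then whether the model's added value
# justifies this cost, and whether the grid supplying it is renewable.
```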
Public interest AI requires a deliberative and participatory design process
No team of developers, no matter how skillful, ethical, socially aware or diverse, can determine on its own what is in the public interest. That is not due to a lack of competence or willingness, but simply true by definition. Nevertheless, ethical awareness as well as diverse team structures are crucial for the AI design process to be successful (Gebru 2020). In agreement with the theoretical outline presented earlier (referring to Dewey, Bozeman, Held and Feintuck), we see the process of deliberation as the only way to identify the public interest in a given case. Without public deliberation on the interests (and justifications) of different public representatives, one simply assumes the interests of others, which can lead to misunderstandings, hurtful misperceptions, and even a lack of acceptance or the complete failure of a project.
The process of deliberation can take different (formal and informal) forms, depending on what suits a specific case: online documentation, city hall meetings, surveys, interviews, and bilateral conversations with diverse citizens, to name a few. Whatever the form, it should let the interests of this public be heard and discussed to the extent that the public itself sees the need for it. In addition to the typical requirements-gathering and testing measures used by project teams, there should be an openness towards citizens’ questions and opinions and, quite practically, a channel for direct contact.
To design a process of deliberation, the project initiators need to ask themselves who ‘the public’ is for their specific project when aiming to serve the public interest. Dewey (1927, p.84) considered the public to be “those indirectly and seriously affected for good or for evil [who] form a group distinctive enough to require recognition and a name”. For a specific AI case and its socio-technical application, the public concerned comprises the direct users of the system (both professional and lay users), the data subjects that were part of the training data, and more broadly all humans that are affected by decisions derived from the system. This last point also includes second-order effects that indirectly influence society as a whole.
To illustrate, let us consider the case of an AI system that analyses and predicts traffic flows and thereby assists the future planning of a city and its traffic. Designers of such a system should not only consider the public authorities and professional users (city planners, public service officials and architects) but also take into account the interests of the different groups of citizens who will be affected by the planning decisions. If the face of the city changes dramatically, certain groups of road users might be structurally disadvantaged and therefore need to have a voice in the design process.
The importance of participatory approaches is becoming more generally recognized, for instance in using participation to improve the quality of datasets and to avoid bias in data (Ogolla and Gupta 2018) as well as in algorithms (Sloane et al. 2020). Sloane et al. (2020), however, rightly caution that participatory design is not in itself a guarantee of a democratic process. The authors warn of “participation washing”, when participation is used to obscure the “extractive nature of collaboration, openness and sharing, particularly in corporate contexts”. They point to pitfalls such as anecdotal participation, which simply codes structural inequality into the results in a “top-down” manner, or, even worse, reducing participation to a performance without actually including the recommendations made by citizens. In a best-case scenario, every project aiming to serve the public interest should consider “participation as justice” (Sloane et al. 2020). This means treating the participating stakeholders as experts in their domains, promoting regular communication, building trustful relationships, and, in short, designing with instead of designing for the participants. We agree with Sloane et al.'s (2020) observation that “experts do not often have a good understanding of how to design effective participatory processes or engage the right stakeholders to achieve the desired outcomes”. It is a real challenge to translate the outcomes of deliberation and citizen participation into the actual development process of AI technology. The existing literature on participatory design in many fields (e.g., see Arnstein 1969; Schuler and Namioka 1993; Kuhn and Winograd 1996; Simonsen and Robertson 2013; Mainsah and Morrison 2014) seems relevant and helpful for further developing methods for the participatory design of AI; particularly promising are Christopher Alexander’s use of design patterns in architecture and urban planning (Alexander et al. 1977), which Gamma et al. (1994) applied to software engineering, and Selloni’s (2017) approach to co-designing public services. We believe that there is an urgent need for research in this area.
Public interest AI systems need to implement technical safeguards
Thus far we have laid out key principles and processes for public interest AI that adhere to democratic governance requirements. Given the technical nature of AI systems, these principles and processes need to be supplemented with technical safeguards. How to embed and protect public values in AI systems is a large area of ongoing research (Hallensleben et al. 2020; Morley et al. 2020; as well as the ACM FAccT community 2020), and much more still needs to be done. We shall here make a few preliminary suggestions on bridging technical requirements with public interest principles, and leave an in-depth discussion for future work. The three concerns we would raise relate to (i) data quality and system accuracy, (ii) data privacy, and (iii) safety and security.
Data quality and system accuracy: Many data sources in the world contain some type of bias, be it due to historical disparities, measurement errors or other reasons (Friedman 1996; Barocas et al. 2019). These biases can lead to inaccurate predictions and decisions, which is a major issue for public interest AI systems that need to be built on a public justification and serve equality. In certain contexts, e.g. when an AI system is supposed to make a prediction in a medical setting, it is crucial that the system provides a high level of accuracy, since false positives or false negatives might have devastating consequences (AI for People 2021). A system that does not deliver the promised output, and that for instance interferes with the privacy of citizens, loses its public justification along with its function and therefore fails to serve a public interest. In fact, as we shall explain in detail in the SyRI example (in Sect. 4), courts in Europe have for these precise reasons disallowed the use of algorithmic systems that lacked accuracy or validity. Data sources can also have inherent limitations due to their collection context, and understanding these limitations is critically important. We see transparency about data sources, their exact use, and the documentation of their shortcomings as necessary to ensure public interest outcomes (Gebru et al. 2020).
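The accuracy point can be made concrete with a small sketch (the numbers are hypothetical): when positive cases are rare, a system can report high overall accuracy while still missing most of the cases that matter.

```python
# A minimal sketch (hypothetical numbers) of why headline accuracy can hide
# devastating errors in a medical-style setting: with rare positives, a
# model can be highly "accurate" while missing most actual cases.

tp, fn = 10, 40      # 50 true positives in the population, 40 missed
tn, fp = 940, 10     # 950 true negatives, 10 false alarms

accuracy = (tp + tn) / (tp + tn + fp + fn)   # 0.95
sensitivity = tp / (tp + fn)                 # 0.20: 80% of actual cases missed
print(f"accuracy={accuracy:.2f}, sensitivity={sensitivity:.2f}")
# A 95%-accurate system that misses 4 in 5 patients cannot sustain its
# public justification; reporting per-error-type rates is the minimum.
```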
Safeguarding data privacy: There are two key connections between data privacy and public interest AI. First, as many scholars have argued, privacy is a condition for the realization of an autonomous life (e.g. Roessler 2004), which in turn is a condition for citizens to engage freely in a social inquiry to determine a collective public interest through deliberation and participation. Second, compliance with the forthcoming European AI Regulation and with existing data protection and privacy laws worldwide is a baseline for any design in the public interest; as outlined in our theory section, accordance with rights and the rule of law is critical for creating outcomes that meet the public interest.
Monitoring system safety and security: In technical terms, it is crucial that the system design is safe and robust, so that the system can fulfil the purpose it is designed for (see CAHAI 2020, p.2). Malfunctions or unintended functions of the system, as well as technical weak spots that lead to security issues, endanger the benefits a system promises and thereby undermine the justification of the system overall (linking back to the point on system accuracy). They can also obviously endanger public safety, which is itself a public interest. Security is a complex topic, having both technical and human aspects (Anderson 2020). A good starting point for security is to monitor failures and harms to decide where to place efforts.
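As one possible way to operationalize this starting point, the following sketch (the category names and fields are our assumptions, not an established standard) logs incidents and tallies them by category to guide where safety effort should go first:

```python
# A minimal sketch (names and categories are assumptions, not a standard)
# of the "monitor failures and harms" starting point: log each incident
# with a category, then tally to decide where safety effort should go.

from collections import Counter
from datetime import datetime, timezone

incident_log: list[dict] = []

def record_incident(category: str, description: str) -> None:
    """Append a timestamped incident so third parties can audit it later."""
    incident_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "category": category,          # e.g. "malfunction", "security", "harm"
        "description": description,
    })

record_incident("malfunction", "model returned empty output for valid input")
record_incident("harm", "user reports wrongly denied service")
record_incident("harm", "user reports wrongly denied service (second case)")

# Tally by category: the most frequent (or most severe) bucket is where
# monitoring and mitigation resources should be placed first.
print(Counter(entry["category"] for entry in incident_log))
```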
Public interest AI systems need to be open to validation
The deliberative and participatory design process, along with the technical safeguards in place, needs to be open to validation by others. There are two important reasons for this. The first is that, despite best intentions, AI systems that deal with the public at large may cause unintentional societal harms. Some reasons were discussed in Sect. 3.3; additionally, there is the effect of the ‘machine learning loop’ (Barocas et al. 2019): historical disparities and measurement-process errors that lead to self-fulfilling prophecies (which are nevertheless invalid outcomes). There are numerous documented cases where these problems have led to systems that inadvertently perpetuate existing stereotypes and disparities (O’Neil 2017; West et al. 2019), including, unfortunately, in the context of (public) administrative decision-making. Having a process and system (with sufficient documentation and audits (CDEI 2020; Gebru et al. 2020)) whose outcomes can be inspected and validated by third parties is necessary to identify these problems and resolve them.
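As a rough illustration of what machine-readable documentation for third-party validation might look like (the field names are our assumptions, loosely inspired by the documentation practices proposed by Gebru et al. 2020):

```python
# A minimal sketch (field names are our assumptions, loosely inspired by
# the dataset/model documentation idea of Gebru et al. 2020) of machine-
# readable documentation that third parties could inspect and validate.

from dataclasses import dataclass, asdict
import json

@dataclass
class SystemRecord:
    public_justification: str     # why the system should exist at all
    data_sources: list[str]       # provenance and collection context
    known_limitations: list[str]  # documented shortcomings of the data
    accountable_contact: str      # direct channel for citizen feedback

record = SystemRecord(
    public_justification="assist municipal traffic planning",
    data_sources=["city sensor network 2019-2021 (coverage gaps in suburbs)"],
    known_limitations=["under-represents cyclists and pedestrians"],
    accountable_contact="traffic-ai-team@city.example",
)
print(json.dumps(asdict(record), indent=2))  # publishable for external audit
```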
The second reason relates to the fundamental democratic norm that all decisions (say, of parliament or public officials) are documented and open to inspection by citizens at a later time. This translates to public sector AI systems and, more generally, to any technology or AI system that claims to be in the public interest. It speaks to the idea that democratic civil societies not only have the right to understand the workings of technology (which requires transparency and explainability) but should also be able to validate that its mechanisms are democratic if they are claimed to serve the public interest.
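One possible mechanism for such later inspection, offered purely as an illustrative sketch rather than a requirement, is a tamper-evident decision log in which each entry hashes its predecessor, so that alterations made after the fact become detectable:

```python
# A minimal sketch (one possible mechanism, our assumption) of a
# tamper-evident decision log: each entry hashes the previous one, so
# citizens auditing it later can detect alterations, mirroring how
# parliamentary records stay open to inspection.

import hashlib, json

def append_decision(log: list[dict], decision: str) -> None:
    prev_hash = log[-1]["hash"] if log else "genesis"
    entry = {"decision": decision, "prev_hash": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)

def verify(log: list[dict]) -> bool:
    """Recompute the chain; any edited entry breaks every later hash."""
    prev = "genesis"
    for entry in log:
        body = {"decision": entry["decision"], "prev_hash": entry["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log: list[dict] = []
append_decision(log, "deployed model v2 after participatory review")
print(verify(log))  # True; tampering with any entry would return False
```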
The concept of being ‘open to validation’, in our opinion, is the fundamental underlying reason behind pushes for transparency and explainability in AI ethics (e.g., see Larsson and Heintz 2020). Making it an explicit requirement has the benefit that transparency and explainability are not reduced to disconnected pieces of non-actionable information: they must in the end allow a holistic validation of the system’s outcomes (in comparison with its justification). Having a system that is open to validation will also lead to the often-quoted goal of ‘trustworthy AI’, but in a deeply democratic manner (that is, trust arises through participation and validation, not public relations).
Finally, openness to validation also relates to the principle of accountability, understood as the clear attribution of responsibility and liability. As mentioned in Sect. 3.1, the justification given for an AI system to work in the public interest is important. This justification needs to be scrutinized and validated by others (in terms of its political impact as well as the technical realization of the system), and, as discussed in Sect. 3.3, citizens need to be able to give feedback on a system. Openness to validation thereby includes a direct channel to those accountable and capable of making changes to the system, or even of deciding to terminate its use.
Up to this point, we have laid out the theoretical basis for our understanding of the public interest (Sect. 2) and outlined a framework for public interest AI (Sect. 3). Next, we turn to cases of AI projects designed to serve the public interest.