Introduction

Especially in the last decade, several scholars, policymakers and organizations in the European Union (EU) have turned their attention to the ethics of (trustworthy and human-centric) Artificial Intelligence (AI). So far, scholarly works have gone in two main directions. On the one hand, some scholars have focused on which ethical principles are most suited to this field and how they should be operationalized in practice (Jobin et al., 2019; Sartor, 2020), often looking at the precedent of bioethics (Floridi et al., 2018; Morley et al., 2021). On the other hand, other scholars have been skeptical of these invocations of AI ethics for several reasons, from ethics washing (Wagner, 2018) to problems of representation or checks and balances (Delacroix & Wagner, 2021; Nemitz, 2018; van Dijk et al., 2021; Yeung et al., 2020). By contrast, there has been little reflexivity on (1) the history of the ethics of AI as an institutionalized phenomenonFootnote 1 and (2) how it compares to similar episodes of “ethification”Footnote 2 in other fields, a comparison that would highlight common (unresolved) challenges.Footnote 3

For the scope of this article, institutionalization refers to the creation of bodies, expert groups or committees with administrative and consultative functions, as a method of decision-making on controversial issues involving science and technology. The focus is on institutionalization in more “governmental” terms, looking at initiatives of some prominent EU institutions and expert groups.Footnote 4 In these contexts, ethics has had an increasing influence on the governance and regulation of emerging technologies at the EU level. The decisions, opinions and guidelines of ethics expert bodies have had consequences within the landscape of innovation governance. This warrants a critical check, both of the institutional aspects and of the narratives about the history of the institutionalization process of ethics.

The dominant narrative on AI ethics, backed not only by some scholars but also by political institutions such as the European Commission (EC), is that rapid digital technological developments are to some extent inevitable and/or necessary to address societal challenges, since they bring about tremendous opportunities but also pose new, unpredictable risks. Therefore, a mix of ex-ante and ex-post approaches, often, but not always, grouped under the label of “ethics”,Footnote 5 which consider the most desirable paths for technological development and assess its social impacts, is deemed necessary to govern emerging technologies and mitigate risks and harms to individuals. Many of these approaches have, over the years, become institutionalized to different degrees and in different ways.

In the field of the ethics of AI, this narrative can be observed in the European landscape in the documents and initiatives of the European Commission’s Robotics and AI Unit at the Directorate-General for Communications Networks, Content and Technology (DG Connect) (European Commission, 2018b, 2018a) and of the European Parliament’s Committee on Legal Affairs (JURI) on AI (García del Blanco, 2020), but also in the work of ethical advisory bodies like the European Group on Ethics in Science and New Technologies (EGE) (European Group on Ethics in Science and New Technologies, 2018) or of expert groups created ad hoc to address ethical issues around the digital domain or AI, such as the EDPS EAG (European Data Protection Supervisor Ethics Advisory Group, 2018) or the AI HLEG (High-Level Expert Group on Artificial Intelligence, 2019a).

However, this narrative seems to overlook certain aspects of the history of the institutionalization of ethics in both the US and the EU. Science and Technology Studies (STS)-informed perspectives have highlighted additional explanations for the rise of institutionalized ethics. Although there is no agreement on historical and sociological accounts of institutionalized ethics, since these phenomena are all still too recent (Eckenwiler & Cohn, 2007), STS scholars have described the ways institutions and organizations have long resorted to ethics in other fields, primarily in the life sciences (Jasanoff, 2011; Tallacchini, 2009), by looking at what the actors discursively perform and construct as ethics and by analysing the institutional settings in which different interests and strategies are pursued (van Dijk et al., 2021). Examples are the need for governments to gain more control over scientific research (e.g. bioethics or Technology Assessment (TA) in the US) or to bring institutions closer to the public on controversies related to emerging technologies (e.g. in the EU; Jecker et al., 1997).

Building on these insights, this article analyzes how different approaches, i.e., bioethics, TA, Ethical, Legal and Social (ELS) research and Responsible Research and Innovation (RRI), followed one another, often “in the name of ethics”, to address previous criticisms and/or to legitimate certain scientific and technological research programs. It focuses on how some historical and sociological accounts (Evans, 2012; Jecker et al., 1997; Jonsen, 1998; Rothman, 1991) of these approaches can provide insights into present challenges to the ethics of AI related to methodological issues, the mobilization of expertise and public participation. This brief history highlights not only disruptive changes but also continuity, i.e. how, in practice and partially in contrast with the proclamations and visions of some programmatic documents, different ethical approaches and the ethics of AIFootnote 6 still co-exist and draw on each other.

The structureFootnote 7 of the article is as follows. In Section 2, an overview of traditional bioethics in the US will be presented,Footnote 8 starting from its origin in the 1960s. The political and methodological context around its birth will be sketched and compared to that of Europe in the early 1990s. Section 3 will summarize the history of TA, looking into the rise and fall of the Office of Technology Assessment (OTA) in the US and into the several initiatives that took inspiration from it in Europe since the 1980s. Section 4 will sketch the spread of ELS- programs in the US and the EU. Each Section will end with a subsection (2.3, 3.3 and 4.3) in which “older” forms of institutionalized ethics are compared to current AI ethics. Section 5 will conclude and provide some recommendations.

Common morality and “principlism” bioethics

The origins in the US: public scandals and the birth of bioethics

The term “bioethics” was revivedFootnote 9 in the US by biochemist Van Rensselaer Potter (1911–2001) as an ethical reflection on human, animal and natural life triggered by scientific and technological developments (Potter, 1970). In the same years, however, it was André Hellegers (1926–1979), a physician and theology professor, and later the Georgetown model of bioethics,Footnote 10 that conceived bioethics in a narrower sense, more closely related to “medical ethics”.Footnote 11

Bioethics in this narrow sense provided a new language that could address the problems resulting from a series of medical research scandals in the 1960s, when it was increasingly questioned whether scientists should have the authority or decision-making power over the ethics of their experiments. After public scandals appeared in the pressFootnote 12 and discussions took place in the Senate, the US government adopted the National Research Act, which established in 1974 “The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research” (which lasted until 1978; hereinafter “the National Commission”).Footnote 13

The National Commission published the Belmont Report and the Institutional Review Board (IRB) Report. The IRB Report endorsed the creation of IRBs, independent ethical committees to peer-review publicly funded research,Footnote 14 later legally established in 1991 by the Federal Policy for the Protection of Human Subjects, also known as the “Common Rule”.Footnote 15 The Belmont Report, published in 1979, identified three basic principles for human research that were later given a scholarly/philosophical foundation, i.e. respect for persons (linked to informed consent), beneficence (linked to risk-benefit analysis) and justice.Footnote 16 Such principles were not groundbreaking or new for scientists, but rather “a post hoc philosophical backfilling of justifications for practices scientists had supposedly endorsed for many years” (Faden & Beauchamp, 1986, p. 216).

Philosophical thought, as a form of secularized thinking (as opposed to theology) that remained “external” and critical to science, eventually influenced the National Commission, and a particular philosophy provided the background for the new profession of bioethics. Philosophers Tom Beauchamp and James Childress co-authored a textbook (Beauchamp & Childress, 2019) that formalized the principles of the Belmont Report into a coherent philosophical theory, later also called “principlism”.

Also referred to as the common morality principle, this method has become the reference point for bioethics professionals, and not primarily because of its academic excellence or internal coherence. The method was, for instance, endorsed by the US government, being attractive to government officials for its claim to represent common morality (Evans, 2012, p. 57) and to speak in the name of the values of the people. It was a good fit with American law (Evans, 2012, p. 48) and with bureaucratic authorities in health care institutions, having an appealing basic structure that simplified decision-making compared to the disorderly or somewhat arbitrary systems used before. It provided an opportunity to increase trust and to counter the decline of trust in physicians caused, in part, by the debates and scandals about human experiments and their coverage in the press (Rothman, 1991, p. 61). Finally, academically, it was a middle ground between the two competing moral theories of deontology and consequentialism and, in line with utilitarian considerations (i.e. balancing risks and benefits), it fit with a pluralist society and a market-oriented framing of regulatory issues on biotechnology (Jasanoff, 2005, p. 177).

The origins of bioethics in the EU (1991–1994)

Bioethics in the US was primarily a response to controversies questioning the authority of scientists, focusing on their individual responsibility in the wake of major research scandals that attracted public attention. In the EU, by contrast, its history is related to solving the problem of the democratic deficit and building a European identity (Jasanoff, 2005), in a moment of transition from the economic to the political unification of the European Community with the Maastricht Treaty of 1993 (Tallacchini, 2015, p. 164). In particular, the 1990s turn to ethics by the EC took place at a time of political clash between the vision of economic growth and competitive development held by the EC and the resistance or opposition of EU citizens and Member States (Busby et al., 2008, p. 804).Footnote 17 In this historical context, ethics helped to mitigate the friction between the EC, EU citizens and EU Member States in two ways.

First, to bring EU institutions closer to the public,Footnote 18 ethics offered a way out of the impasse between, for example, a clear Research & Development (R&D) agenda for further industrializing agriculture and the risks to the environment perceived by citizens, while the official policy language downplayed value judgments on such risks, presenting them as “objective” science (Levidow & Carr, 1997). These problems were recognized in an EC communication in 1991 (European Commission, 1991), where ethics was associated with the need to avoid uncertainty and confusion in the public debate. In this communication, the EC sketched what the role of ethics should be (European Commission, 1991, p. 16), i.e. (1) opening up the discussion to “interested parties” and (2) enabling experts to participate in the legislative process. Regarding the latter point, a few months later, in 1991, the Group of Advisers on the Ethical Implications of Biotechnology (GAEIB) was established, composed of six members including scientists, lawyers, philosophers and theologians, to advise the EC and identify ethical issues raised by biotechnologies, as well as to inform the public.Footnote 19

Second, ethics allowed the EU legislator to address value-related issues in the life sciences despite these usually being regulated at the national level, avoiding conflicts and negotiating between the diverging positions of EU Member States.Footnote 20 The role of the GAEIB was thus also one of political integration, allowing the discrepancies between moral visions at the EU and Member State levels, as well as the disharmony in the respective legal frameworks, to coexist (Tallacchini, 2009, p. 293). One of the main divides in the life sciences in the 1990s, for example, concerned the opposition to some forms of genetic engineering research, for instance in Germany, as against more accepting and experimental attitudes, such as in the UK (Jasanoff, 2005).

The criticisms by the European ParliamentFootnote 21 contributed to the replacement of the GAEIB by the EGE in 1997, which, until today,Footnote 22 has been tasked with covering all areas of application of science and technology, not just biotechnology. The list of opinions of the GAEIB and the EGE is a good example of how to “neutralize”Footnote 23 socially and politically divisive conflicts resulting from EC policies in the life sciences. The EGE became an “obligatory passage point” (Callon, 1986) for the EU legislator, since whenever directives had anything to do with values, its opinions had to be explicitly considered (Tallacchini, 2015, p. 165).

Bioethics and the limits of principle-based ethics

The system of traditional bioethics was successful throughout the 1970s and 1980s in the US, but it started showing its major political weaknesses in the late 1980s, which were reflected in the EU in the 1990s. Despite its steadily growing influence, from the 1950s until today, in academia, hospitals and biomedical research centers, bioethics has been said to be “in crisis” (Evans, 2012, p. 75; Solomon, 2014),Footnote 24 due to the increasing (political) controversies about the legitimacy and authority of the field and its methodological orientations,Footnote 25 to the point of questioning its very existence (Turner, 2009).

Among these criticisms, scholars have noted a problem of “quality control” (Benatar, 2006). In a narrower sense, bioethics could be considered a branch of philosophical inquiry, having to meet the quality standards of academic philosophy. However, bioethics is often interpreted in a much broader sense, with many practices gathered under its umbrella, each with its own terminology and methods. As a result, many people can speak in the name of bioethics without needing to adhere to any constraint or peer-review process, thus leaving little room to check the soundness of an ethical argument, principle or line of reasoning. A parallel in the ethics of AI can be found in the confusion between fundamental rights, from a legal perspective, and ethical principles. In the AI HLEG’s guidelines, while the principles are said to be anchored in, and an operationalization of, fundamental rights (High-Level Expert Group on Artificial Intelligence, 2019a, p. 10), the reduction of the latter to basic principles such as “prevention of harm” empties them of their legal and normative meaning, thus leading to a “watering down” (Delacroix & Wagner, 2021, p. 4).

More specifically, due to its short life, the field of AI ethics lacks proven methods to translate principles into practice (Mittelstadt, 2019, p. 503). While bioethics, throughout the decades, has been able to rely on the more robust mechanisms that medicine has developed to translate high-level principles into practical requirements (e.g., professional codes of conduct, ethics review committees or licensing schemes), the same cannot yet be said for AI. There is general agreement on high-level principles in the field (Jobin et al., 2019), but there is no accepted hierarchy of principles in case of conflicting norms, and few enforcement and accountability mechanisms are in place (Delacroix & Wagner, 2021, p. 7). However, several attempts are being made to translate principles into practice (European Commission, 2020; Morley et al., 2021), so one could argue that it is just a matter of time before principlism acquires the same robustness in AI as in the biomedical field.

Nonetheless, there are deeper problems with principlism as a method in the first place. Principlism, as an expression of analytic philosophy (Callahan, 1982, p. 4), has long been considered too narrow and simplifying compared to other, richer approaches (Callahan, 1982). As principlism became dominant, groups of scholars who were more interested in debating ends than in an abstract system of knowledge or “engineering philosophy” in which ends were pre-defined were cut off from the discussion (Evans, 2012, p. 65). All questions about metaphysics, worldviews or the impacts of technologies on society at large came to be seen as beyond the scope of “mainstream” bioethics (Zwart et al., 2014, p. 6). Despite these criticisms, it is surprising to observe that AI ethics still heavily relies on the principle-based approach of bioethics. In analogy to bioethics, the ethical debate on AI seems largely limited to design and implementation, not to whether AI systems should be built in the first place (Greene et al., 2019, p. 2127). Ethical charters are framed as imperatives, not as discussions about the possibility of not doing anything at all. This was evident in the work of the AI HLEG, where the section of the public draft of the ethical guidelines discussing “red lines”, i.e. applications that should never be created by AI developers, was removed from the final version of the document (Metzinger, 2019). Many people in the group, it was reported, were not comfortable with the idea that more guidance was needed on distinguishing between what people can do with AI and what people should (not) do with AI systems (High-Level Expert Group on Artificial Intelligence, 2018, p. 11).

Technology assessment and anticipatory governance (1972–1995)

Origins: OTA in the US

Another major approach to understanding and assessing the impact of emerging technologies, Technology Assessment (Tallacchini, 2009, pp. 284–287), developed in the US in parallel with institutionalized bioethics. At least initially, narratives on ethics did not play an explicit role in the establishment of TA as a dominant science governance approach, but the development of institutionalized TA and that of (bio)ethics are interconnected. Like bioethics, TA was initially expert-based, providing allegedly neutral and factual input, but afterwards, because of criticisms, it broadened to include other stakeholders.Footnote 26 Additionally, it supported traditional approaches to legislation in governing science and technology (Paula, 2008, p. 1) and a form of cost-benefit analysis, meant to mitigate the costs of technological developments and maximize their benefits in a formal and transparent manner.

The Office of Technology Assessment (OTA) in the US was established by the US Congress in 1972, but its history has deeper origins in the 1960s and 1970s, regarding (a) the concerns about the relations between technology and the environment and (b) the tensions between the executive and legislative branches of the US federal government (Bimber & Guston, 1997; Grunwald, 2009, p. 1104; Kunkle, 1995, p. 176). As for (b), in light of technological developments, e.g. in aeronautics, Congress members felt they needed more and better technical advice to legislate and choose research and development policies on matters involving scientific and technical complexity, in contrast with the more thorough and wider scientific and technical advice available to the US president (Kunkle, 1995, p. 177). As a result, the idea of creating a separate advisory mechanism for Congress, independent from the executive branch, emerged.

Given the early suspicions of and hostility toward this vision of TA, e.g. that what it really meant was regulating technology (a left-oriented “regulation in disguise” (Kunkle, 1995, p. 180)) or that it would infringe on Congress’s policy-making authority, Emilio Daddario (1918–2010) and his successor as chairman, John Davis (1916–1992), began to emphasize the informative and supplementary role a hypothetical Office of TA would play for Congress, redressing the imbalance of federal powers created by Congress’s advisory deficiencies vis-à-vis executive agencies (Kunkle, 1995, p. 182).

In its early years, rather than a proactive channel for public input into a coherent technology policy, with a broad monitoring function, OTA was more of a means to provide technical advice, requested ad hoc, to members of Congress. Its goal was related more to parity of information between the executive and legislative branches than to assessing the consequences of technological developments. Outside experts, for instance, were consulted only post hoc, on topics that had already been decided beforehand by the Technology Assessment Board (TAB), i.e. the twelve-member body governing the OTA (Kunkle, 1995, p. 188). In this first phase, TA mostly fulfilled an “early warning function”, providing policymakers with information on possible future “secondary effects” (i.e. social, cultural, political) of technologies. However, it soon became clear that, in practice, the impacts of technologies could only be partially foreseen and that providing neutral information to policymakers was not possible. Therefore, OTA moved from an early warning function to the development of policy alternatives. With this new approach, OTA earned, until 1995, the reputation of a reliable, informative and “unbiased” source of information for Congress, producing hundreds of reports on a wide range of topics.Footnote 27

Just as with bioethics, looking at the dynamics that gave rise to TA in the US is useful to better understand how it developed separately in Europe, within the institutions of the EU and its Member States.

TA in the EU

The OTA was an influential example for many European states that established their own agencies in the mid-1980s (Grunwald, 2019, p. 704; Van Eijndhoven, 1997, p. 269).Footnote 28 Among the first were the German Bundestag Technology Assessment Bureau (established in 1989), the Netherlands Office for Technology Assessment (established in 1986; later renamed the Rathenau Institute), the French Office Parlementaire d’Evaluation des Choix Scientifiques (established in 1983), the UK Parliamentary Office of Science and Technology (established in 1989) and the Danish Board of Technology (established in 1986).Footnote 29 Together, the European TA institutions founded the European Parliamentary Technology Assessment (EPTA) network in 1990, later joined by institutions from other countries.

The OTA also had an influence at the supra-national level, on the European Community. In 1978 the European Commission approved the Forecasting and Assessing for Science and Technology programme (FAST, 1978–1983)Footnote 30 to develop a coherent long-term policy on science and technology. The EP later established the Scientific Technology Options Assessment (STOA), originally launched as a pilot project in 1987, to provide objective, comprehensive and independent assessments of science and technology issues to the Parliament.Footnote 31 One of the reasons behind the creation of STOA, just as for OTA, was the idea of balancing power between the Parliament and the EC on matters related to the impacts of technologies (Van Eijndhoven, 1997, p. 273).

In its specific instantiations in Member States and EU institutions, TA evolved differently from the initial conception of OTA, for both political reasons (e.g. differences in political systems)Footnote 32 and contingent ones (e.g. budget and capacity).Footnote 33 Two new features of (some) European TA stemmed from the idea that the decision-making process should be broadened. First, a strand of TA called “public” or “participatory” TA (Van Eijndhoven, 1997, p. 278) focused on empowering democracy and promoting a more pluralistic and inclusive approach (Van Eijndhoven, 1997, p. 278; Vig, 1992, p. 5). Second, a strand developed in the Netherlands, called “constructive” TA or CTA (Schot & Rip, 1997), had a more economically driven focus on new technological opportunities (Vig, 1992, p. 5), influencing technology development at early stages; here, technologies are considered as evolving in close interaction with social systems, and the aim is to select those technologies that can maximize social, economic and environmental benefits.

TA and the politicization of (ethical) expertise

Despite some similarities, the challenges faced by institutionalized TA in the US were different from those faced by bioethics. By the early 1990s, OTA had come to a dilemma: on the one hand, if it claimed more autonomy concerning technology policy, it would be shut down by the Republicans who, in the mid-1990s, controlled both houses of Congress (Kunkle, 1995, p. 193); on the other hand, if its role was only to provide ad hoc advice, it would be seen as superfluous. OTA was eventually closed in 1995, to reduce expenditures and avoid duplicating functions, and its functions were relocated to the Library of Congress.Footnote 34

In parallel with the decline of OTA, but for different reasons and through different dynamics, TA in Europe progressively became a minor, marginalized tool compared to the “use” of ethics as an instrument to build a European identity or to promote market integration.Footnote 35 TA in the EU became a “residual instrument” for parliaments, with a marginal position in the exploration of policy options, little impact and too little use of its results (Smits et al., 1995). Two major problems contributed to this outcome. A political problem was that the EU initiatives on TA had no respectable position due to a lack of political legitimation, little insight into decision-making on technology, few opportunities to gain experience in policy-making and no stable research infrastructure, regardless of the quality of the reports (Smits et al., 1995, p. 292). A quality problem was that the scale of EU initiatives was too small (with a small budget) and too varied, leading to insufficient quality, so that they were not taken seriously at the policy-making level.

In other words, there was a problem with the politicization of TA. In the US, OTA was criticized for being too close to the Democrats; in the EU, political dynamics made TA a residual instrument in the hands of parliaments. Advisory groups on ethics, sometimes falling under the label of “expert groups”Footnote 36 at the EC, are also vulnerable to being politicized. The politicization of ethical advice takes place when the norms invoked to legitimate a particular ethical body (e.g. balance, independence or transparency) are eroded by partisan bias (Briggle, 2009, p. 314), even though its advice is often presented as objective and neutral.

Depending on its composition, the internal discussions of an ethics group could lead to almost opposite results (Evans, 2012, p. 95). In the EU, it was noted how ethics has been used as a way to push the economic and political visions of the EC on emerging technologies (Levidow & Carr, 1997, p. 38). In this rule-of-experts system, ethical decisions are claimed to be legitimated on the basis of a “common morality”. In the case of the EU, this “common morality” consists of values enshrined in the EU Charter of Fundamental Rights and the EU Treaties, representing European citizens. In fact, however, their legitimation is technocratic, as the experts of these groups are not democratically elected but selected based on their specialized knowledge (Evans, 2012, p. 129; Tallacchini, 2009).Footnote 37 In this sense, ethics becomes an exercise in offering “technical” and “value-free” solutions to fixed facts, while addressing problems that were originally considered to involve conflicting values and political interests, such as in the cases of Genetically Modified Organisms (GMOs) (Levidow & Carr, 1997; Wynne, 2001, 2005) or biotechnologies (Jasanoff, 2005; Tallacchini, 2015). Such values are then removed from consideration and only a pre-defined subset of them is taken for granted to achieve the most efficient solution.

To reflect on the ways technocracy and politicization are affecting AI ethics, it is necessary to first look at the responses that bioethics and TA, from the 1990s onward, tried to implement to partly address the problem of expertise in the first place.

From ELSI and ELSA to RRI

Ethical, legal and social impacts research programs

It was the problem of expertise that mostly contributed to the crisis of traditional bioethics (Solomon, 2014) and the decline of traditional TA. These dynamics contributed to the emergence of different approaches, mostly motivated by the need to open up ethical discussions and involve more stakeholders, focusing on increased collaboration with experts from different fields (Zwart et al., 2014, p. 6).

In the US, many other (ethical) committees were created to support the work of the National Commissions,Footnote 38 including, later, the Ethical, Legal, and Social Implications (ELSI) Working Group (WG) of the Human Genome Project (HGP),Footnote 39 which became the basis worldwide for a distinctive research approach bearing the same label. In contrast to expert-based traditional bioethics and TA (Hedgecoe, 2010), ELSI proposed earlier, upstream engagement, trying to “predict” or anticipate new technological developments also through the involvement of interdisciplinary stakeholders in the discussion. The idea was to collectively explore alternative paths to integrate societal concerns into scientific practice, e.g. by allowing social scientists to engage with scientists in the laboratory.Footnote 40

The turn to ELSA in Europe (1994–2012) and the rise of RRI (2012-present)

ELSI initiatives proliferated successfully outside the US, constituting a role model for ELSA in the EU and in Member States’ research programs. As in the case of bioethics, ELSA in Europe played a different political role from ELSI in the US. ELSA was envisioned as contributing to the integration of new scientific and technological developments into European society, by legitimizing research and facilitating the societal uptake of technological innovation, in the wake of, e.g., the EU backlash against agri-food technology in the 1990s (Levidow & Carr, 1997; Rodríguez et al., 2013).

ELS- aspects were considered for the first time in the EU in the context of the European Framework Programmes (FP) for Research and Technological Development, i.e. the EU’s main policy instrument to guide research, promote socio-technical integration and support R&D.Footnote 41 After the spread of ELSA in the EU’s FPs, ELSA approaches started appearing in national funding bodies in several Member States from the 1990s onwards, such as the Economic and Social Research Council Genomics Network in the UK or the Centre for Genomics and Society in the Netherlands.Footnote 42

Just as with TA, it is difficult to generalize about one single ELSA approach or object in Europe. The several initiatives that go under the name of ELSA share some general characteristicsFootnote 43 but are also highly heterogeneous in terms of scope, expertise mobilized, and methods used to predict the future and engage stakeholders (Hilgartner et al., 2017, p. 828; Zwart & Nelis, 2009, p. 541).

After a “golden decade” of the ELSA approach (2002–2012), during which different fields underwent a process of “elsification” (Zwart et al., 2014, p. 11), a new concept was introduced in the policy and research discourses of the EC, i.e. RRI,Footnote 44 presented as a groundbreaking break with the (recent) ELSA past (Owen, Pansera, et al., 2021; Owen, Schomberg, et al., 2021). In the period around 2011–2013, there was a growing impression at the EC that the elsification of research was inadequate to address the ethical aspects of emerging technologies, including AI and robotics. In the words of the main promoters of RRI,Footnote 45 a broader concept of responsibility and a more open design for innovation were needed (Owen, Schomberg, et al., 2021, p. 220).

RRI, compared to ELSA, offered a way not only to evaluate the risks and ethical aspects of technologies, but also to broaden the research process, promoting the active involvement of several stakeholders (e.g., from industry) that were absent in previous EC research programs. Such involvement would help draw up different research agendas to better define and address “grand societal challenges” (Von Schomberg, 2012, p. 15). Additionally, instead of seeing ethical aspects as constraints or restrictions, RRI promoted a positive attitude to ethics, as an enabler of technology development (Von Schomberg, 2012, p. 16). Unlike ELSA, it targeted the whole innovation process, from research to production and distribution, and not just its very end, with particular attention to the early stages, in an anticipatory way (Von Schomberg, 2012).Footnote 46 In short, RRI sought to challenge, more ambitiously than ELSA, the seemingly linear and apolitical “technology/market dyad” that had dominated innovation in the previous decades, by re-configuring the norms, institutions and political systems that govern innovation processes (Owen, Schomberg, et al., 2021, p. 221).

RRI was introduced to have a substantial role from the FP Horizon 2020 (H2020) onwards, as a top-down decision of the EC, and was later also established as an academic field and a scientific-intellectual movement.Footnote 47 Today, it is being discussed whether the initial promises and expectations around RRI are being kept (Owen, Schomberg, et al., 2021, p. 223).

ELSA-I, RRI and the limits of public participation

ELSI-A (as well as, later on, RRI in the EU) was developed to counter some of the problems of politicization and expertise of classical bioethics and TA highlighted in Sect. “TA and the politicization of (ethical) expertise”. While ELSI too, in the early 1990s, was initially meant to provide substantive ethical and social-science expertise, its mission shifted a decade later, with a new emphasis on public participation, especially in the EU in the aftermath of the mad cow crisis and the rejection of GMOs (Hilgartner et al., 2017, p. 830). The alleged solution to this problem was to promote a two-way public engagement with science, abandoning the “public deficit” model explanation, which assumed that the general public lacked sufficient knowledge and understanding of basic science and technology (Flynn, 2007, p. 10). This lack of knowledge, in turn, would produce hostility towards science and technology, and therefore needed to be overcome to release the full potential benefits of the latter.

This abandonment, however, has been more apparent than real (Wynne, 2005). Some policy actors still misunderstand or ignore their causal role in public mistrust problems, not only because of individual responsibility, but also because of the “inadvertent reproduction of an established set of institutional reflexes and habits which inadvertently create public alienation” (Wynne, 2005, p. 217). Anchored to the old model of framing the public, they fail to acknowledge that the public is a construct, a reflection of the needs of those in power (Wynne, 2005, p. 218). It has also been noted that the effect of participatory efforts was limited: they were said to rarely produce significant benefits because of asymmetries of power (Hilgartner et al., 2017, p. 838), insufficiently deep and democratic participatory exercises (Wynne, 2006) or the risk of producing misleading representations of public preferences (Casiraghi et al., 2021; Hilgartner et al., 2017; Tait, 2009).

Despite the efforts made to distinguish RRI from ELSA, it has been pointed out that, so far, RRI has faced problems similar to those of ELSA. RRI remains vaguely defined, or at least its different visions are not necessarily consistent with each other.Footnote 48 One of the ways to define RRI is precisely through its emphasis on public engagement, associating citizens with the scientific process from the start, since integrating societal actors from different backgrounds is pictured as a way to nurture innovation and help solve societal challenges. However, integrating societal actors with researchers, policymakers and innovators is more complex than RRI discourses suggest (Felt et al., 2016; Rommetveit et al., 2019, p. 89).

Similarly, the AI ethics agenda also (inadvertently?) reflects further problematic commitments. The deficit model returns in a new shape, shifting from a deficit in scientific knowledge to a deficit in trust (Stilgoe & Guston, 2017, p. 862), as noted in the AI HLEG guidelines: “Trustworthiness is a prerequisite for people and societies to develop, deploy and use AI systems. Without AI systems – and the human beings behind them – being demonstrably worthy of trust, unwanted consequences may ensue and their uptake might be hindered, preventing the realisation of the potentially vast social and economic benefits that they can bring” (High-Level Expert Group on Artificial Intelligence, 2019a, pp. 4–5).

The main idea is that a lack of acceptance hinders innovation and blocks economic and competitive benefits, rather than signaling something else. The main concern is to smooth the integration of AI into society, rather than question its very necessity or impact.Footnote 49

A further problem is that participatory exercises have been insufficiently deep and democratic (Wynne, 2006) or bear the risk of producing misleading representations of public preferences (Hilgartner et al., 2017, p. 830). The EU strategy on AI stresses the element of public participation as a salient feature of human-centric AI. For example, in the AI HLEG guidelines, stakeholder participation appears among the ‘non-technical methods’ required to ensure trustworthy AI (High-Level Expert Group on Artificial Intelligence, 2019a, p. 23). After the publication of the guidelines, the AI HLEG organized a piloting phase in which stakeholders were invited to provide feedback on the AI assessment list included in the guidelines, which resulted in the ALTAI assessment list (High-Level Expert Group on Artificial Intelligence, 2020).

Although these forms of participation should empower different groups and allow them to be heard in the debate on AI ethics, there are reasons to be skeptical about these outcomes (van Dijk et al., 2021, p. 11). First, concerning the piloting phase, only the last part of the document, containing guiding questions to operationalize the insights of the rest of the document, was open to revision and comments, and not the whole document. The idea behind it was that “organizations working on AI”, which mostly meant tech companies, would “test” the assessment list of the ethics guidelines. It was also mentioned that the feedback loop would help to better understand how to implement the assessment list within an organization, thus serving as a vehicle to spread principles and requirements that were not open for discussion in the first place. The first two parts of the guidelines, laying out the conceptual foundations, requirements and principles, were by contrast not open to consultation. Second, this form of participation was limited to providing feedback, which could (or could not) be considered at the discretion of the EC.

These elements suggest that the ethics described here empowers EU institutions while disempowering EU citizens. Ethics documents often demarcate between “the public”, which needs to be passively surveyed and educated rather than genuinely involved, and stakeholders (e.g. scientists or businesspersons) who actively survey and educate (Greene et al., 2019, p. 2126) and are considered expert enough to provide valuable feedback. In these cases, public participation can be used by EU bodies for self-legitimation purposes and as a means of validation, or as a “smokescreen” to divert attention from the excessive involvement of the ICT industry in ethics initiatives (Metzinger, 2019; Wagner, 2018), leaving too little room for, e.g., civil society to be part of these groups (Stolton, 2018).

Conclusions

To sum up, this article has investigated some of the origins of the institutionalization of the ethics of AI. It does not aim to be exhaustive, but to highlight some patterns and characteristics that past examples of institutionalized approaches to technology assessment (some of which explicitly invoke “ethics”, i.e. bioethics and ELSA/I, others not, i.e. TA and RRI) have in common with the ethics of AI in EU institutions. Bioethics, TA, ELSA-I and RRI have followed one another as institutionalized approaches to assess and govern the impacts of emerging technologies, for different reasons in the US and in the EU, especially in the life sciences. These approaches share similarities (including the discourses and methods used and the importance given to expertise and/or to public participation), but they have also developed in opposition to their predecessors (e.g. ELSI vs. traditional bioethics, RRI vs. ELSA), to bridge their gaps and address their challenges.

First, the institutionalization of “traditional” bioethics was described, in the US and in the EU. While the origin of bioethics in the US was more related to controversies questioning the authority of scientists and to the role of the federal government in taking control of research ethics, the case of the EU was more related to solving the problem of the democratic deficit and building a European identity. Second, the case of TA was presented. The history of TA in the US was inextricably linked with that of the OTA and its role in mediating the tensions between the executive and legislative branches of the US federal government. In the EU, TA evolved differently from the initial conception of OTA, both for differences in political systems and for contingent reasons of budget and capacity. Third, the developments of ELSI/A and RRI were described. ELSA in Europe played a different political role from ELSI in the US, which was supposed to “predict” or anticipate new technological developments also through the involvement of interdisciplinary stakeholders in the discussion. ELSA was envisioned to contribute to integrating new scientific and technological developments into European society, a role later taken up by RRI, which emphasized boosting innovation and addressing the “grand (socio-economic) challenges” of EU society through greater public participation and collaboration with industry. Finally, the challenges faced by these forms of institutionalized ethics were related to the ones that AI ethics is facing today in the EU. The comparison showed that, despite changing labels and approaches, from bioethics to TA to ELSI/A and RRI, there are some recurring, unresolved challenges in ethics regarding the methods employed, vulnerability to politicization and the (mis)use of public participation.

From a methodological perspective, the value of principlism, and in general of the “quest” for the best principles for AI ethics, should be questioned more. While EU ethics claims to be based on fundamental rights and founded on EU values, it too often bluntly refers to bioethical principles, which are rooted in a cost-benefit perspective and risk-based discourses. While the very idea of operationalizing principles may seem appealing from an engineering perspective, it poses serious challenges to the ways fundamental rights are articulated and operationalized in the EU context. One way to still draw on the knowledge of academic ethics could be to re-qualify the use of “moral philosophy”. Training in moral philosophy can still offer several contributions, such as framing debates, clarifying and evaluating different positions, and facilitating cooperation and engagement (Bietti, 2020).

Regarding the politicization of ethics, what is concerning is the exclusion, from ethical debates, of people and groups interested in discussing ends, rather than in developing ethical frameworks and principles to achieve, ex post, pre-defined ones. Civil society groups are often marginalized in the debates, even though they have been very active on topics such as ethics washingFootnote 50 and the text of the EC’s AI Act.Footnote 51 Political and procedural questions are of crucial importance in institutionalized ethics (Briggle, 2009, p. 321). Despite the recent reforms of the EC’s expert groups,Footnote 52 primarily targeted at corporate dominance, there are still shortcomings in the way these groups function, especially regarding the balance of disciplinary backgrounds and political opinions among their members. In any case, deviations from the rules designed to avoid disciplinary imbalance and bias should be sanctioned.

As for public participation, it is too simplistic to simply call for more participation and involvement “of all stakeholders” or a “multi-stakeholder approach” “through the whole process of implementing AI systems” (High-Level Expert Group on Artificial Intelligence, 2019b, p. 37, 2019a, p. 19). There are different understandings, modalities and aims of participatory activities, from restoring trust in scientific institutions, to legitimating policy commitments, to encouraging mutual understanding of different disciplines and interests (Felt et al., 2007, p. 56). Therefore, calls for more stakeholder involvement should be more specific and reflect on how participatory activities are performed, for whom and for what they are pursued, who is supposed to participate and at which stage.

To conclude, there may be opportunities to change the governance of AI based on specific ethical considerations, but structural and discursive changes in institutionalized ethics are needed to avoid the pitfalls of the US and EU’s recent past. To implement these changes, two different strategies could be followed. On the one hand, if the language of ethics in the public discourse is considered useful, a different approach to this ethics, methodologically and institutionally, would be advisable. Conflicts and different standpoints about the governance of AI can be embraced rather than avoided by seeking consensus at any cost (Mouffe, 2001) in the name of “objective” values held by all EU citizens. On the other hand, a clarification of the use of the term “ethics” in general is needed,Footnote 53 since, having acquired so many different meanings in the documents and discourses analyzed, it has almost become an empty signifier (Mouffe, 2001; van Dijk et al., 2021) that can freely change its meaning (Buell, 2001). In some cases, this could mean clarifying the philosophical basis (e.g., metaethical assumptions or normative theory) behind the ethical claims that are made. In others, a “deflation” of the use of the term could be advisable. Human-centric and trustworthy approaches to AI and public participation initiatives, for example, do not always need to be labeled “ethical”, a label which can be misleading and lead to instrumentalization; they could instead be robustly conceptualized by referring, for instance, to human rights, democracy and the rule of law.Footnote 54