
3.1 Beyond Human Rights Impact Assessment

In the previous chapter, we discussed the role of human rights impact assessment (HRIA) in removing or mitigating potential adverse impacts of data-intensive systems based on AI. However, the focus on these possible consequences does not eliminate the risk of other negative social effects concerning the relation between technology and human beings.

Although legal principles, including human rights, embed ethical and societal values, not all of these values assume legal relevance. Moreover, the codification of these values in legal principles necessarily embodies them in specific provisions, giving them a shape that differs from general and abstract ethical or societal values.

There is a sphere of social and ethical issues and values that is not reflected in legal provisions but is relevant in defining a given community’s approach to the use of data-intensive AI systems. If we consider, for example, smart city projects, solving all the issues concerning the impact on rights and freedoms does not exclude questions about the social acceptability of these projects.Footnote 1

Deciding whether we want an urban environment heavily monitored by sensors, where city life is determined by the technocratic vision of the big platforms, or whether we want to give AI responsibility for deciding student admissions to university or patient admissions to intensive care, involves choices that raise major ethical and societal questions.

Such questions concern the society we want to see, the way we want to shape human relationships, and the role we want to assign to technology and its designers. These ethical and societal questions are very close to those we face when deciding to develop a new medical treatment that affects individual health, entailing both benefits and risks.

The latest wave of AI developments raises ethical and societal concerns about the dehumanisation of society,Footnote 2 the over-reliance on AI,Footnote 3 the value-dependent approach unintentionally or intentionally embedded in some applications, the prevalence of a socio-technical deterministic approach,Footnote 4 the dominant role of big players and their agenda in shaping the digital environment without due democratic process.

These and other similar issues are either not legal questions or not fully addressed by the existing legal framework. As a result, interest has grown in the potential role of ethical principles, including social values, within a broader notion of data ethics.

Nevertheless, from the outset, the debate on data ethics has been characterised by an improper overlap between ethics and law, in particular with regard to human rights. In this sense, it has been suggested that ethical challenges should be addressed by “fostering the development and applications of data science while ensuring the respect of human rights and of the values shaping open, pluralistic and tolerant information societies”.Footnote 5 We can summarise this approach as ‘ethics first’: ethics plays a central role in technology regulation because it is the root of any regulatory approach, the pre-legal humus that is more important than ever where existing rules do not address or only partially address technological challenges.

Another argument in favour of the central role of ethics comes out of what we might call the ‘ethics after’ approach.Footnote 6 In the concrete application of human rights we necessarily have to balance competing interests. This balancing test is not based only on the rights themselves but also on the underlying ethical values, meaning that the human rights framework is largely incomplete without ethics.

Both these approaches are only partially correct. It is true that human rights have their roots in ethics. There is an extensive literature on the relationship between ethics and law, which over the years has been described by various authors in terms of identification, separation, complementation, and interweavement.Footnote 7 Similarly, the influence of ethical values and, more generally, of societal issues on court decisions and balancing tests is well known and has been investigated by various disciplines, including sociology, law & economics and psychology.

Here the point is not to cut off the ethical roots, but to recognise that rights and freedoms flourish on the basis of the shape given to them by legal provisions and case law. There is no conflict between ethical values and human rights, but the latter represent a specific crystallisation of these values, circumscribed and contextualised by legal provisions and judicial decisions.

This reflection may lead to a broader discussion of the role of ethics in the legal realm, but this study takes a more pragmatic and concrete approach by reframing the interplay between these two domains within the context of AI and focusing on the regulatory consequences of adopting an approach based on ethics rather than human rights.

The main question should be formulated as follows: what are the consequences of framing the regulatory debate around ethical issues? Four different consequences can be identified: (1) uncertainty, (2) heterogeneity, (3) context dependence, (4) risks of a ‘transplant’ of ethical values.

3.1.1 The Socio-ethical Framework: Uncertainty, Heterogeneity and Context Dependence

As far as uncertainty is concerned, this is due to the improper overlap between law and ethics in ethical guidelines.Footnote 8 While it is true that these two realms are intertwined in various ways, from a regulatory perspective the distinction between ethical imperatives and binding provisions is important. Taking a pragmatic approach, relying on a framework of general ethical values (such as beneficence, non-maleficence, etc.), on codes of conduct and ethical boards is not the same as adopting technological solutions on the basis of binding rules.

This difference is not only due to the different levels of enforcement, but also to the more fundamental problem of uncertainty about specific requirements. Stating that “while many legal obligations reflect ethical principles, adherence to ethical principles goes beyond formal compliance with existing laws”Footnote 9 is not enough to clarify the added value of the proposed ethical principles and their concrete additional regulatory impact.Footnote 10

Given the different levels of binding nature and enforcement, shifting the focus from law to ethics and reformulating legal requirements as ethical duties opens the door to de-regulation and self-regulation. Rather than binding rules, businesses can therefore benefit from a more flexible framework based on corporate codes of ethics.Footnote 11

This generates uncertainty in the regulatory framework. When ethical guidelines refer to human oversight, safety, privacy, data governance, transparency, diversity, non-discrimination, fairness, and accountability as key principles, they largely refer to legal principles that already have their contextualisation in specific provisions in different fields. The added value of a new generalisation of these legal principles and their concrete applications is unclear and potentially dangerous: product safety and data governance, for instance, should not be perceived as mere ethical duties, but companies need to be aware of their binding nature and related legal consequences.

Moreover, ethical principles are characterised by an inherent heterogeneity due to the different ethical positions taken by philosophers over the centuries. Virtue ethics, deontological or consequentialist approachesFootnote 12 can lead to different conclusions on ethical issues. AI developers or manufacturers might opt for different ethical paradigms (note that those mentioned are limited to the Western tradition only), making harmonised regulation difficult.

Similarly, the context-dependence of ethical values entails their variability depending on the social context or social groups considered, as well as on the different ethical traditions.

By contrast, although the universal nature of human rights necessarily entails contextualised application through national laws, which partially create context dependency and can lead to a certain degree of heterogeneity,Footnote 13 human rights seem to provide a more stable framework. The different charters, with their provisions, but also regional courts (such as the European Court of Human Rights), and a coherent legal doctrine based on international experience can all help to reduce this dependence on context.

This does not mean that human rights do not present contextual differences, but compared with ethical values they are clearer, better defined, and more stable. From a regulatory perspective, this facilitates harmonisation and reduces the risk of uncertainty.

3.1.2 The Risk of a ‘Transplant’ of Ethical Values

A largely unaddressed issue in the current debate on AI and ethics concerns the methodological approach that we might call the ‘transplant’ of ethical values. This is related to the risk of considering data ethics as a mere extension of ethical principles already existing and applied in other fields.

The experience of the Institutional Review Boards (IRBs) clearly shows the limitations of such an approach. As historical research has shown, the set of values used by IRBs was, for a long time, influenced by a kind of ethical imperialismFootnote 14 in favour of medical science, following the principles laid down after the Nazi criminal experiments involving human beings and in response to important cases of questionable studies.Footnote 15

The important role of medical ethics in this debate led regulators to adapt the model created for biomedical research to the social sciences, without considering, or while underestimating, the differences between these fields.Footnote 16 This was not the result of a deliberate intention to impose biomedical ethics on other disciplines, but the consequence of not taking into account the variety of fields of application.

Biomedical research has a methodology based on hypotheses and testing, which means research goals defined at the outset of data collection and a specific and detailed protocol to achieve them. In addition, biomedical experiments have an impact on the physical and psychological condition of the people involved, whereas social sciences and data processing for social analysis do not necessarily produce these effects.

Finally, physicians have an ethical duty to do no harm and to benefit their patients, whereas social sciences and data analysis may be merely descriptive or, in certain circumstances, may create legitimate harm (e.g. the use of data to detect crimes, with consequent harm to offenders in terms of sanctions, or algorithms used to decide between alternative solutions involving potential damages, as in the case of industrial/car accidents).

Considering these differences, the 1960s debate on extending biomedical ethics to social sciences,Footnote 17 and the extensive use of AI systems for social analysis, the experience of IRBs thus provides an important warning in framing the debate on ethical assessment of AI, highlighting the consequences of a ‘transplant’ of ethical values.

In addition, the current ethical debate may render this transplant obscure, as many ethical guidelines and charters do not explain which ethical approach has been or should be considered, even in relation to general ethical frameworks. Deontological ethics, utilitarian ethics, virtue ethics, for example, are just some of the different possible ways of framing ethical discourse, but they imply different perspectives in setting guidelines on the moral implications of human actions.

To investigate whether these potential risks associated with the circulation of ethical models are present in the data ethics debate, an empirical analysis is required, focusing on the ethical guidelines for AI proposed and adopted by various organisations. To this end, we can benefit from several studies carried out to identify the key values of these guidelines.Footnote 18

Although these studies suffer from certain limitations – the use of grey literature, search engines for content selection, linguistic biases, and a quantitative text-based approach that underestimates the policy perspective and contextual analysisFootnote 19 – they do provide an overview of the operational dimension of data ethics.

Based on this evidence, we can see that there is a small core of values that are present in most documents.Footnote 20 Five of them are ethical values with a strong legal implementation (transparency, responsibility, privacy, freedom and autonomy) and only two come from the ethical discourse (non-maleficence and beneficence).

Another studyFootnote 21 identified several guiding values; the top nine, with a frequency of 50% or more, are: privacy protection; fairness, non-discrimination and justice; accountability; transparency and openness; safety and cybersecurity; common good, sustainability and well-being; human oversight, control and auditing; solidarity, inclusion and social cohesion; explainability and interpretability. As in the previous study, the aggregation of these principles is necessarily influenced by the categories used by the authors to reduce the variety of principles. In this case, if we exclude values with a legal implementation, the key ethical values are limited to openness, the common good, well-being and solidarity.

If we take a qualitative approach, restricting the analysis to the documents adopted by the main European organisations and to those documents with a general and non-sectoral perspective,Footnote 22 we can better identify the key values that are most popular among rule makers.

Considering the four core principlesFootnote 23 identified by the High-Level Expert Group on Artificial Intelligence (HLEGAI),Footnote 24 respect for human autonomy and fairness are widely developed legal principles in the field of human rights and law in general, while explicability is more a technical requirement than a principle. Regarding the seven requirementsFootnote 25 identified by the HLEGAI on the basis of these principles, human agency and oversight are further specified as respect for fundamental rights, informed autonomous decisions, the right not to be subject to purely automated decisions, and adoption of oversight mechanisms. These are all requirements already present in the law in various forms, especially with regard to data processing. The same applies to the remaining requirements (technical robustness and safety, privacy and data governance; transparency; diversity, non-discrimination and fairness; accountability; and environmental well-being).

Looking at the entire set of values provided by the HLEGAI, the only two elements – as framed in the document – that are only partially considered by the law are the principle of harm prevention – where “harms can be individual or collective, and can include intangible harm to social, cultural and political environments” – and the broad requirement of societal well-being, which generally requires a social impact assessment.

Another important EU document identifies nine core ethical principles and democratic prerequisites.Footnote 26 Amongst them, four have a broader content that goes beyond the legal context (human dignity, autonomy, solidarity and sustainability). However, in the field of law and technology, human dignity and autonomy are two key values widely considered both in the human rights framework and in specific legal instruments.

Based on the results of these different analytical methodologies (quantitative, qualitative), we can identify three main groups of values that expand the legal framework. The first consists of broad principles derived from ethical and sociological theory (common good, well-being, solidarity). These principles can play a crucial role in addressing societal issues concerning the use of AI, but their broad nature might be a limitation if they are not properly investigated and contextualised.

A second group includes the principle of non-maleficence, the principle of beneficence,Footnote 27 and the related broader notion of harm prevention (harm to social, cultural, and political environments). These are not new or undefined principles, especially in the field of applied ethics and in research and medical ethics. They can play an important role in AI, but we should bear in mind the potential risk of a ‘transplant’ of ethical values, discussed above.Footnote 28

The last group, which includes openness, explicability and sustainability, seems already partially integrated into legal provisions, although a context-specific application of these principles in the field of AI is possible and desirable. However, these values are more closely related to technical implementation, via specific standards or procedures to ensure their application.

This analysis of the empirical evidence on the ethical values that should underpin AI-based systems shows a limited risk of a ‘transplant’ of ethical values and, where ethical values are correctly framed, the possibility of avoiding an improper overlap between the ethical and legal realms and values.

It is therefore crucial to properly consider the social and ethical consequences of these systems in a complementary analysis, additional to the human rights impact assessment, avoiding any confusion between these different layers. The ethical and societal dimensions should be included in the models and processes adopted in AI design and development, to capture a holistic definition of the relationship between humans and machines.

At the same time, we should be aware that these general values, such as the common good, well-being and solidarity, are highly context-based, more so than human rights. This local and community dimension should therefore be considered and properly framed when referring to them.

3.1.3 Embedding Ethical and Societal Values

Assuming the existence of socio-ethical values that could or should guide us in defining the relationship between AI systems and potentially impacted individuals and groups, we need to ask whether and how these values can become part of these systems.

Looking at the evolution of the relationship between ethics and technology,Footnote 29 the original external standpoint adopted by philosophers, which viewed technology as an autonomous phenomenon with its own potentially negative impacts on society, has been progressively replaced by a greater focus on sector-specific forms of technology.

This enables us to go beyond a formal separation between ethics and technology, in which the former merely concerns the social dimension without contributing to co-shaping technology itself.Footnote 30 This approach is evident, for instance, in technological mediation theory,Footnote 31 which highlights the active role of technology in mediating between humans and reality as well as between individuals.

Recognising technology’s active role in co-shaping human experienceFootnote 32 necessarily leads us to consider how technology plays this role and to focus on the values underpinning technological artefacts and how they are transposed in society through technology.Footnote 33 At the same time, it is important not to describe this dynamic in terms of mere human-machine interaction, considering the machine as something given without recognising the role played by designers,Footnote 34 and their values, in defining the ethical orientation of these artefacts.

Against this background, technology cannot be considered as neutral, but as the result of both the values – intentionally or unintentionally – embedded in devices/services and the role of mediation played by the different technologies and their applications.Footnote 35 These considerations, applied to data-intensive systems, confirm the central role of value assessment and design supporting the idea of ethical assessment and the ethical by-design approach to product/service development.

While the ethical debate focuses on the moral agency of technological artefactsFootnote 36 and the role of human-technological interaction, legal analysis focuses more closely on individual responsibility in a broader sense. Indeed, the theoretical framework of strict liability and vicarious liability encourages us to look beyond the human-technology interaction and consider a tripartite relationship between humans (users), technology (artefacts) and those humans in a position to decide which values are embedded in the technology (designers, in general terms).

From this perspective, the issues concerning the new machine age are not circumscribed by the morality of these machines but involve the role of both the designer and the users who can shape or co-shape the ethical values embedded in the machine and transmitted through technology.

This awareness of the role of designers and users is also present in studies on technology mediation which recommend risk assessment.Footnote 37 Moreover, AI applications and ML processes highlight how the technology has not reached a level of maturity that would justify labelling AI applications as moral agents and, at the same time, how users play an active role in the applications’ learning phases.Footnote 38

For these reasons, it is important to address the general responsibility for decision-making choices in AI design, remembering that these choices contain three separate components: technological feasibility, legal compliance, and socio-ethical acceptability. Thus, we avoid the simplistic conclusion that feasibility is the only driver of tech development: not everything that is feasible is also legal, and not everything that is both feasible and legal is also acceptable from an ethical and social standpoint.

As discussed above, legislation does not cover these ethical and social issues: they are either unexplored by law or irrelevant or neutral from a legal perspective (e.g., the choice between alternative policy solutions such as predictive policing for crime prevention or social aid plans) and therefore lie outside its sphere of action. Moreover, ethical and social values are not the mere projection of individual ideas but the result of a given cultural context.

In addition, ethical values should be carefully embedded in technology, bearing in mind the known difficulties in ethically assessing unforeseen applicationsFootnote 39 and the potential conflict between the ethical values embedded in AI systems and freedom of choice, both at collective and individual level.Footnote 40

In addressing these issues, several approaches can be taken that recognise the importance of an ethical and societal assessment of data-intensive systems based on AI, the most frequently adopted being ethical guidelines, questionnaires, and the appointment of ethics committees.

The first two options – ethical guidelines and questionnaires – retrace in the socio-ethical context the steps already taken in the legal realm. Guidelines add complementary ethical provisions to the existing legal requirements and, similarly, additional questions or sections on ethics and social issues are introduced into impact assessment models. However, both these approaches have their limitations.

As discussed in the previous section with regard to the role of ethics, guidelines can be affected by uncertainty and heterogeneity, due to an improper interplay between ethics and human rights and the variety of possible ethical approaches. In addition, ethical guidelines may reflect corporate values or approaches and, more generally, values defined outside a participatory process reflecting societal beliefs.

Regarding the use of questionnaires to embed ethical and societal values in AI system design, this option may more clearly emphasise the contextual component of the sociotechnical dimension, but again there are constraints.

First, questionnaires often contain only vague and limited questions about societal and ethical issues, in assessment models that favour other concerns, such as legal ones.

Second, criticisms of the value-oriented approaches adopted by corporations can be equally made of the way the questions are worded and the areas they investigate.

But the biggest problem with the use of questionnaires is their mediated nature. While the human rights section of an HRIA questionnaire refers to an existing legal framework and its implementation in a given case, here this framework is absent.

Before assessing how ethical and social values are embedded in AI solutions, we need therefore to define these values, which are not specified in the provisions or in case law and vary much more widely than legal values.

As such, questions about existing ethical and social values are unable to guide the user of the questionnaire in a comparison between the As-Is and To-Be, since the second element in this equation is not defined. In the end, these questions require experts or a community able to interpret them on the basis of an understanding and familiarity with the given cultural context in which the data-intensive system will be developed and deployed.

Questionnaires on ethical and societal values underpinning AI systems are therefore important, but more a means than an end. They require panels of experts and stakeholder engagement to provide proper feedback to guide the implementation of those values in the system.

It is these expert panels and stakeholders’ participation that represent the true core of the process of embedding ethical and societal values in AI systems design.

3.1.4 The Role of the Committee of Experts: Corporate Case Studies

The potential role of expert panels has been recognised by companies involved in AI development over the last few years, with the creation of committees of experts to give advice on the challenges associated with the use of data-intensive AI systems.

These panels are frequently known as ethics boards and share some common features which can help us determine whether this is the most appropriate and effective way to deploy experts in AI design and development.

A well-documented case study is the Facebook Oversight Board, created by Facebook “to promote free expression by making principled, independent decisions regarding content on Facebook and Instagram and by issuing recommendations on the relevant Facebook Company Content Policy”.Footnote 41 To achieve this goal the Oversight Board reviews a select number of “highly emblematic cases”, determines whether decisions were made in accordance with Facebook’s stated values and policies and issues binding decisions.Footnote 42

Although Facebook moderates three million posts every dayFootnote 43 and its Oversight Board reviewed only seven cases in 2020 (a rate of 0.00023%) and 16 in 2021, it has been pointed out that even a limited but well-selected sample of cases can significantly contribute to reshaping the core features of the service.Footnote 44 However, this conclusion entails two shortcomings that we should consider.

First, a supervisory body created by a company selects emblematic cases specifically to highlight weaknesses in the company’s services/products. This happened with the Oversight Board, whose decided cases addressed the core issues of the transparency of content moderation and its criteria, the quantity of resources available for content moderation, the harmonisation of the company’s self-regulation standards, the role of human intervention, and the accuracy of automated content moderation systems.

Given that the main outcome of these decisions is general – concerning the way the company shapes its product/service rather than the individual cases decided – a question arises: is an Oversight Board necessary, or could the same result be achieved through an auditing process? In this case, possible issues with transparency, harmonisation, etc. could be spotted and analysed by truly independentFootnote 45 auditorsFootnote 46 reviewing the content moderation decision-making process, without necessarily adopting a case-specific approach.

Second, in its analysis, the Facebook Oversight Board performs a kind of conformity assessment. By evaluating the application of Facebook’s self-regulationFootnote 47 in each case – albeit within the human rights framework – the Board does not consider the overall and highly debated impact of the social network and its policies.Footnote 48 This limitation is even more significant as the Board’s remit is limited to removed content and does not cover the entire service (behavioural advertising, personal data exploitation etc.).

On this basis, it is hard to see the Oversight Board as a model for a committee that can contribute to embedding societal and ethical values in the design of AI systems. The Oversight Board does not focus on the overall impact of the application, but considers it and the adopted technologies as given. The Board only assesses their functioning and how to improve some procedural aspects, without questioning the design of the AI-based social network and its broader overall impact on individuals and society.

Compared with the Facebook Oversight Board, the case of the Axon AI Ethics Board is more closely focused on product/service design. While the Oversight Board examines only a narrow part of Facebook’s products/services, reviewing content moderation in contentious cases, the Axon Ethics Board’s mission is “to provide expert guidance to Axon on the development of its AI products and services, paying particular attention to its impact on communities”.Footnote 49

Axon’s Board was set up to give advice on specific products/services and in particular on their design while they are in the development phase.Footnote 50 More specifically, with regard to AI, the company is committed to providing the board with meaningful information about the logic involved in building its algorithms, the data on which the models are trained, and the inputs used, explaining the measures taken to mitigate adverse effects, such as bias and misuse.

This is in line with Axon’s Product Evaluation Framework,Footnote 51 a risk assessment focusing on key aspects to be considered in evaluating the social benefits and costs during product development.Footnote 52 Here the main concerns, given Axon’s area of operation,Footnote 53 are technology misuse, criminalisation of persons, personal data processing, potential biases and transparency, but it also includes larger categories (“violation of constitutional or other legal rights” and “potential social costs”) which can broaden the analysis. The self-assessment is performed by the company during the developmental phase and then reviewed by the Ethics Board.Footnote 54

The interaction between the company and the Board – based on the latter’s recommendationsFootnote 55 and the way these are addressed by the company – is only partially documented in the company’s reports on Ethics Board activity. While this is a limitation in terms of transparency, the information presented does demonstrate a dialogue between the Board and the company on single technology applications, and a partial acceptance of the Board’s recommendations by the company, which has introduced changes in its products/services.Footnote 56

A key issue in this regard concerns full access to information about products and services, as Board members are not part of the company. In this case, the members of the Ethics Board signed a specific non-disclosure agreement (NDA) covering trade secrets, proprietary information, and information about in-development products. While the use of NDAs does not hamper the activity of the Ethics Board and facilitates dialogue with the company, it raises concerns about effective interaction with potentially affected communities and stakeholders.Footnote 57

It is worth noting that Axon has also designated two ombudspersons (a designated Axon employee who is a member of the Ethics Board and sits outside the internal chain of command, and a non-company member of the Ethics Board) who can be contacted by employees who have concerns about the implementation of the Product Evaluation Framework in specific cases; the Board is available to hear those concerns without fear of attribution.

As with Facebook’s Oversight Board, Axon’s Ethics Board does not as such take a participatory approach giving voice to potentially affected communities and groups.Footnote 58

While the decisions of Facebook’s Oversight Board are binding on the company – though limited to content moderation issues – Axon’s Ethics Board can only provide recommendations, as the Oversight Board does with regard to Facebook’s Content Policy. This implies that business interests can easily override any ethical concerns expressed by the Ethics Board.Footnote 59

Despite the differences described, both Facebook and Axon created ethical boards which have had an impact on product/service design or use, and have documented this impact in their reports or decisions.

In a second group of cases, companies have set up ethics boards, but the concrete effect of their work is either unclear or, at any rate, not made properly public.

This is the case of the AI Ethics Advisory Board set up in 2021 by Arena Analytics (predictive analytics and machine learning for the hiring process), bringing together experts from academia, technology, human resources, and ethics. The focus of this board is developing guidance “to help Arena manage competing ethical obligations”.Footnote 60 However, the concrete outcome and impact on the business model or product/service is not documented, nor are the procedures involved in board selection and its work.

Similarly, SAP (enterprise application software) set up an AI Ethics Advisory PanelFootnote 61 of academics, policy experts and industry experts, to advise the company on the development and operationalisation of its AI guiding principles. This external body interacts with an internal AI Ethics Steering Committee which consists of company executives “from all board areas with supervision of topics that are relevant to guiding and implementing AI Ethics” and advises company teams on how specific use cases are affected by these principles.

The interesting aspect of the SAP model is the presence of an internal unit focused on ethical issues (the AI Ethics Steering Committee), which expands the figure of the Chief Ethics Officer through the wider participation of all executives dealing with ethical issues. It is worth noting, however, that the broader AI Ethics Steering Committee should not be considered an alternative to a Chief Ethics Officer, since the function-based Steering Committee, centred on executive positions in given areas, does not necessarily carry an ethical remit. A Chief Ethics Officer could therefore help to raise and manage ethical issues internally. This role could also be played by an external body, such as SAP’s AI Ethics Advisory Panel, but in the SAP case this panel does not seem to have this function, providing the company with more general advice on the development and operationalisation of its guiding ethical principles rather than case-specific advice.

A different approach is adopted by Salesforce, which appointed an internal Chief Ethical and Humane Use Officer and an external Advisory Council to the Office of Ethical and Humane Use of Technology composed of “a diverse group of frontline and executive employees – as well as academics, industry experts, and society leaders from Harvard, Carnegie Mellon, Stanford, and more”.Footnote 62 In this case, the positive presence of a dedicated officer with a specific background is undermined by the lack of information available on the identity of the members of the external ethics body.

In all three cases, the lack of information on the workings of these bodies or documentation of their work necessarily limits our evaluation of their effectiveness in implementing ethical values in the companies’ practices and products/services.

A third approach by corporations is the setting up of ethics boards without specific information on concrete objectives, compositions or procedures. This is the case of IBM’s internal AI Ethics Board, which “is comprised of a cross-disciplinary team of senior IBMers, co-chaired by IBM’s Chief Privacy Officer and AI Ethics Global Leader, and reports to the highest levels of the company”, but the list of its members is not publicly available, nor is its concrete impact on the company’s strategy.Footnote 63

Summing up these case studies involving some of the major AI players, we can group the ethics boards into three categories. A first group of boards plays an active role in the company’s business, with appointed members whose identity is known, internal procedures in place, defined tasks, and a commitment by the firm to take the board’s input into account. In a second group, the boards’ tasks and members are clear, but their concrete interaction with, and impact on, company decisions is not documented. Finally, there is a third group where the identity of the board members is unknown and there is only a general description of the board’s main purpose.

Such empirical evidence allows us to make some general considerations:

(i) Corporate AI ethics boards demonstrate a variety of structures, including internal and external bodies.

(ii) They also show a variety of remits, providing general advice and guidelines, product/service advice, usage policies,Footnote 64 self-assessment ethics questionnaires,Footnote 65 and in some cases more indefinite tasks.

(iii) The independence and high-profile reputation of the board members is crucial.

(iv) Greater transparency about the structure and the functioning (internal procedures) of these bodies is required, including their impact on companies’ decision-making processes.

(v) Their effectiveness may be affected by decisions regarding staffing and access to information.

(vi) These bodies can be coupled with external ombudspersons/advisory councils.

(vii) Internal requests to ethical boards regarding critical issues/cases play an important role.

(viii) Accountability should be fostered with regard to company decisions on the basis of the boards’ recommendations or instructions.

(ix) Only in limited cases and concerning users’ interests/rights (see e.g., Facebook) are the decisions of these boards mandatory for the company.

(x) While the guiding values of these boards often refer to human rights and fundamental freedoms, companies commonly specify the principles and corporate values that drive the boards’ decisions.

We can therefore conclude that there is no uniform model for corporate ethics boards in AI systems, but a wide range of solutions. Nevertheless, the various shortcomings highlighted in the case studies can help us to identify the core elements required for general AI ethical oversight: independence and reputation of the board, values-orientation, effectiveness, transparency, and accountability.

3.2 Existing Models in Medical Ethics and Research Committees

Medical ethics represents an interesting field for a comparative analysis with the challenges of AI, as clinical situations often involve conflicts of values in which none of the proposed alternatives is entirely free of problems, even if none is actually against the law.

For this reason, medical ethics was the first field in which the ethical review process was adopted following abuses, conflicting interests, and human rights violations.Footnote 66 In order to address the variety of medical activities (healthcare practice, research, drug production) various types of ethical committee have been set up, each with a different focus, nature and goal: (i) Clinical Ethics Committees (Healthcare/Hospital Ethics Committees); (ii) Research Ethics Committees (Institutional Review Boards or IRBs in the US); (iii) Ethics committees for clinical trials.

Some of these committees are specifically regulated by law and their involvement is a legal requirement for carrying out certain activities. This is the case with ethics committees for clinical trials and with Research Ethics Committees in certain fields, while Clinical Ethics Committees are often created on a voluntary basis, though in some cases they too are regulated by law.

This section does not set out to investigate the regulatory framework, origin, or underpinning values of these committees, but rather their operational models. These cases, often with a long history and a consolidated structure, can help us see how to embed ethical and societal values through similar committees in the AI sector. What is more, awareness of the strengths and weaknesses of these models will prevent their mere transpositionFootnote 67 to AI, taking the most valuable elements of the existing models to facilitate better expert committee design.

3.2.1 Clinical Ethics Committees

Given the lack of a specific regulation in many cases and their voluntary nature, Clinical Ethics Committees (CECs) might be an option to consider as a potential model for expert committees in AI, in the absence of legal requirements in this respect.

CECs, also known as Hospital Ethics Committees, are part of the larger category of clinical ethics support servicesFootnote 68 for healthcare professionals or patients. They first appeared in the late 1970s and have spread widely across the globe with a variety of structures and functions.Footnote 69

Their main tasks are to: (i) address ethical issues relating to medical practice (reactive role); (ii) perform an educational function with field training based on discussions during CEC meetings (proactive role); (iii) review institutional policies.Footnote 70 Meanwhile, their crucial achievements are to give voice to the different actors involved in clinical practice on ethical questions (physicians, patients, patients’ families), foster a multidisciplinary approach, and raise awareness of actual practices and related ethical issues.

They may be constituted in different ways: an ethicist model centred on an individual ethics expert,Footnote 71 multi-disciplinary committees, or small sub-groups of a larger ethics committee.Footnote 72 They may adopt a top-down or a bottom-up approach,Footnote 73 either emphasising the expert advisory roleFootnote 74 or assisting healthcare personnel in handling ethical questions in day-to-day clinical practice.Footnote 75 Rights-holder and stakeholder involvement may also vary from one model to another, including a complete absence of involvement.

The main challenges they face are (i) risk of outsourcing clinicians’ decisions and responsibilities to these committees, (ii) limited effective patient participation in a model that should be patient-centred,Footnote 76 and (iii) lack of adequate financial resources and organisational commitment.Footnote 77

A possible alternative is Moral Case DeliberationFootnote 78 where all those involved in an ethical decision meet to discuss and think through the moral aspects of a particular patient case with the support of an external figure who can facilitate dialogue without having the authority to decide or suggest/recommend a certain course of action.Footnote 79 The absence of an ethical decision-maker (ethical committees or advisor) here avoids the danger of those involved putting the decision out to an ad hoc body, thereby diminishing their responsibility and engagement.Footnote 80

The solution adopted by CECs is generally to implement a deliberative model, traditionally seen as the best way to deal with applied ethics issues. The deliberative process fosters dialogue within the committees, giving voice to a plurality of views and stakeholders, and encourages a striving for consensus, the primary goal of these committees.

3.2.2 Research Ethics Committees

Compared with CECs, Research Ethics Committees (Institutional Review Boards or IRBs in the US) have a longer tradition rooted in five principal documents: the Nuremberg Code (1947), the Declaration of Helsinki (1964–2013), the Belmont Report (1978), the Oviedo Convention (1996),Footnote 81 and the Universal Declaration on Bioethics and Human Rights (2005).

They therefore have a more regulated composition and function, and give us a clearer picture of the role this model can play in the context of AI.

In addition, ethics committees, which originated in medical research, have been progressively extended to the social sciences, a crucial factor in assessing their value for AI. Many AI applications operate in the field of medicine, but many more concern social issues and relationships.

Social science differs from medical research as regards ethical questions, in that the latter is founded on (i) the researchers’ greater knowledge of the problem compared to the participants, (ii) a scientific method involving protocols and experiments, and (iii) the duty to do no harm.Footnote 82 It is therefore impossible simply to transplant medical ethics to social science.

The existing case history of medical ethics should therefore be examined carefully before taking this area as a model for data ethics and related practices,Footnote 83 or for the functioning of ethics boards. On the other hand, the experience of research committees, which address a variety of ethical issues not necessarily related to medical ethics, may point to some valuable approaches for AI.

Research Ethics Committees (RECs) may have a local, regional or national remit, but the ethical assessment of a research project is almost always performed at a local or, in some cases, regional level.Footnote 84 Local RECs at hospitals, universities and research centres, or regional RECs can therefore be viewed as a possible model for AI expert committees.

However, the variety of approaches seen in ethical practice in non-medical sectorsFootnote 85 makes it difficult to define a uniform assessment model, other than in very general terms.Footnote 86 This is consistent with the scope of this section, which focuses on the main actors (committees) and their operations rather than on a general assessment model. It is assumed that these bodies – correctly constituted and functioning – will be in the best position to design the most suitable models for each sector-specific AI application.

The members of these committees are usually appointed by the entity they serve (university, hospital,Footnote 87 research centre) or by government bodies in the case of national and regional RECs.Footnote 88 The composition of these committees varies, but they may include different types of members based on their qualifications (ethics experts, legal experts, sector-specific experts, stakeholders’ representatives) and use different selection criteria (internal experts, external experts, laypersons).

The varying mix of qualification and appointment criteria will necessarily impact on the committee’s behaviour, favouring either the ethical or the technical component, internal origin or external oversight, etc. As in the case of private companies’ ethics boards, the composition and selection play a crucial role in the workings and expected decisions of these bodies.

The operational approach and internal organisation of these committees also vary widely, although a number of common elements can be found: use of self-assessment questionnaires or forms to gather information from the project applicants about their proposals, regular meetings of the RECs, appointment of one or more rapporteurs for a pre-examination of the cases, adoption of deliberation methods based either on consensus or majority voting and, where necessary, interaction with applicants.

An interesting case study in the area of research committees concerns the ethics committees set up by the ERC Executive Agency (ERCEA) to assess ethical issues relating to EU-fundedFootnote 89 frontier research projects selected on the basis of scientific excellence. The transnational composition of the ethics panels and of most of the projects, and the wide variety of topics addressed (ERC grants cover both hard science and the humanities), make this case relevant to the assessment of the impact of AI on societal issues. AI applications are developed in a variety of fields and often deployed or adopted in different countries, raising questions as to the consistency of assessment between one country and another.

The ethical issues monitored for funded research projects concern eleven areas: (i) use of human embryos/foetuses; (ii) involvement of human beings; (iii) use of human cells/tissues; (iv) protection of personal data; (v) use of animals; (vi) non-EU countries; (vii) environment, health, and safety; (ix) dual use; (x) non-military use; (xi) misuse.Footnote 90

Several of these areas are regulated by law at EU, international and national level, including: clinical trials;Footnote 91 human genetic material and biological samples;Footnote 92 animal experimentation;Footnote 93 data protection;Footnote 94 developing countries and politically sensitive issues;Footnote 95 environmental protection and safety;Footnote 96 and dual use in the context of security/dissemination.Footnote 97 The presence of regulated areas in ethical screening reveals the hybrid nature of this process and the overlap between legal and ethical assessment when ethical principles are codified in law and further interpreted by the courts or other authorities (e.g. data protection authorities).

Another important aspect of the ethical assessment concerns its autonomy with regard to the scientific evaluation, as the scientific quality of the projects and their conformity with ethical values are assessed by different and independent panels. Since the ethical assessment follows the scientific one, both the funding body and the applicant have an incentive to reach a compromise to avoid a promising project being rejected and the funding blocked. However, the mandatory ethical assessment and the need for a positive outcome should encourage research teams to take ethical issues into account from the earliest stages of project design to avoid this danger.

An ethical assessment may have four different outcomes. Aside from the worst-case scenario in which the project is rejected on ethical grounds, the other three possibilities are: (i) ethics clearance (the project is approved with no ethical requirements); (ii) conditional ethics clearance (the applicant must satisfy certain ethical requirements before starting the project or during project development); (iii) a further ethics assessment (the project presents major ethical issues that must be separately assessed by an ad hoc ethics committee).

Where appropriate, an ethical review may recommend a complementary ethics check/audit on specific issues at a certain point in project development (e.g., fieldwork involving human beings), which may also result in a request for a new ethical assessment.

In the third case of a further ethics assessment, an ad hoc panel makes an in-depth analysis of the proposal with additional information provided by the applicant in response to the ethics screening report. The result is an assessment report which may either reach a positive conclusion (full or conditional ethics clearance) or decide the ethical issues have not been fully addressed and demand a further ethics assessment, as happens with very complex and sensitive projects. In the latter case, the further assessment may also include an interview with the applicant and, if appropriate, with competent officers of the hosting research institution (e.g., members of the REC of the hosting institution, legal advisors, data protection officer, etc.).Footnote 98

Although the entire assessment process is centred on an ethics panel, an important role is also played by ERCEA ethics officers who carry out a pre-screening of the proposal and flag any issues not highlighted by the applicants in their mandatory ethics self-assessment. In addition, the ethics officers oversee compliance with the requirements set out in the ethics reports, including any checks and audits.

This continuous monitoring (ethics checks, audits, further assessments, oversight of compliance) is a distinctive feature of the case study, whereas RECs do not usually carry out any follow-ups to the assessments they perform before the project begins.

Another distinctive element of the ERCEA case, compared with either corporate or research committees, is the non-permanent nature of the ethics committees: a group of experts is appointed on a case-by-case basis for each round of assessment. Each round of ethical screening usually involves several projects together (a few, or only one, in the case of an ethics assessment or further assessment of the most challenging projects), and ethics officers play an important role in selecting the panel to match the experts’ profiles to the issues emerging from pre-screening.

In thinking of ERCEA ethics committees as a model for AI expert committees, there are a number of elements to be considered. The first concerns the importance of legal principles and requirements which largely transform the assessment into an evaluation of compliance, driven not by ethical values, but by their contextualisation in specific provisions.

The ERCEA assessment is thus a hybrid, only partially grounded in ethics, highlighting how the interplay between law and ethics described above is crucial in defining the different operational areas of AI assessment.

Other important factors to consider are the two aspects of the nature and activity of ERCEA ethics committees: the internal dynamics of these expert panels and the interaction between the scientific and the ethical assessment.

Regarding the first aspect, the mixed nature of the assessment – including legal compliance requirements – raises problems for interdisciplinary dialogue within the committees between those appointed for their particular expertise in regulated sectors (e.g. data protection) and those primarily concerned with ethical issues. In some cases, the latter may see the legal provisions as a constraint on a broader ethical evaluation, which may lead them to take positions in contrast with those of the legislator or the case law.

Another important aspect of the interaction among experts concerns the risk of identification between the expert and the applicant where both operate in the same field. Peers may set softer requirements for projects deeply rooted in a shared field of work and demonstrate an unconscious bias in favour of research that pursues the same goals as their interests. This may also lead experts to overestimate the importance of the field of application or underestimate the negative impacts from different perspectives covered by the assessment.

Finally, three external elements may affect the internal dynamics of the expert panel: the workload, the role of the ethics officers and training. A high workload can compromise the committee’s performance, reducing the time available for discussion and undervaluing criticisms in favour of race-to-the-bottom solutions. Here the ethics officers – though formally neutral – can play an important part in mediating the discussion and facilitating interaction between experts, as well as setting the agenda and deciding the time slots devoted to each proposal.

Although the ethics officers are not formally involved in the discussion and its outcome, it is evident that they can exert a degree of moral suasion towards a smoother interaction among the experts and the achievement of consensus, which is the deliberation criterion adopted by ERCEA panels. It should be noted that the consensus criterion is undoubtedly a further factor encouraging the experts to collaborate proactively in resolving their differences, as the evaluation cannot conclude without reaching a general agreement. By contrast, adopting a more rapid process based on majority voting would result in a less cooperative attitude.

For these reasons, the training of the experts is an important element for a better understanding of the specific goal of the evaluation and for managing the interdisciplinary nature of the committee.

Another important aspect of the nature and activity of the ERCEA ethics committees concerns the interplay between the scientific and the ethical assessment, characteristic of cases of applied ethics concerning research and innovation. As often happens with RECs,Footnote 99 the scientific assessment of the project is distinct from the ethical assessment, the latter following the project’s selection for funding.

This sequence (application, scientific evaluation, project selection, ethical assessment, funding grant) is intended to provide rapid feedback to applicants on the success of their proposals, considering the high impact this has on applicants’ careers and the limited number of cases with unaddressed ethical issues requiring an assessment prior to funding. However, this inevitably leads applicants to focus on the scientific side and view ethical issues as less critical and often as a ‘last mile’ problem. This may also partially affect the ethical assessment by the panel which is responsible for blocking an innovative project on ethical grounds, in a funding model centred on scientific excellence and innovation.

Special training for researchers might raise their awareness of the ethical by-design approach to project planning, avoiding the situation of a poorly executed self-assessment due to a limited understanding of the ethical issues. It is worth noting the similarities with the dynamics seen in private sector AI development, where ethical questions are frequently considered as an add-on to the core of the application and relegated to a final-step assessment of conformity.

3.2.3 Ethics Committees for Clinical Trials

Ethics committees play a critical role in clinical trials, the main area in which ethical issues have been raised with regard to safeguarding human dignity, human rights, safety and the self-determination of research participants.

The field is therefore highly regulated at international,Footnote 100 regional and national level. In the EU, the legal framework was previously established by Directive 2001/20/EC on the implementation of good clinical practice in the conduct of clinical trials on medicinal products for human use.

Article 2.k of the Directive defined an ethics committee as “an independent body in a Member State, consisting of healthcare professionals and non-medical members”. These committees were required to give their opinion, before the clinical trials began, on a wide range of issues, from the scientific aspects of the trial and its design, its benefits and risks, to the rights and freedoms of trial subjects.Footnote 101 A positive opinion of the ethics committee was obligatory before a clinical trial could go ahead.Footnote 102

This legal framework has recently been reshaped by Regulation 536/2014, which repealed Directive 2001/20/EC and introduced several changes to further harmonise and streamline clinical trial procedures. Although the role of ethics committees remains pivotal and their approval is still necessary for clinical trials to begin, the scope of their assessment and their composition have been modified, raising several criticisms.

Article 4 of the Regulation requires that the ethical review be performed in accordance with the law of the Member State involved, leaving its regulation up to the State itself, and stating that the review “may encompass aspects addressed in Part I of the assessment report for the authorisation of a clinical trial as referred to in Article 6 and in Part II of that assessment report as referred to in Article 7 as appropriate for each Member State concerned”. Here Part I refers to the scientific aspects of the trial (i.e. anticipated therapeutic and public health benefits, risks and inconveniences for the subject, completeness and adequacy of the investigator’s brochure, and legal compliance issues), while Part II concerns the ethical issues (informed consent, reward and compensation, recruitment, data protection, suitability of the individuals conducting the trials and the trial sites, damage compensation, biological samples).

The new provisions therefore separate the scientific assessment from the ethical assessment, leaving Member States free to limit the ethics assessment to Part II alone, as some countries have already done.Footnote 103 This undermines the holistic assessment of the clinical trial,Footnote 104 which comprises its scientific aspects, its design and its ethical issues.Footnote 105

Further concerns about the new Regulation regard the make-up of the ethics committees,Footnote 106 since it fails to establish rules for the composition and organisation of the committees.Footnote 107 It limits itself to the minimal requirementsFootnote 108 – expertise of the members, multidisciplinary backgrounds, participation of laypersons,Footnote 109 and the deliberative method – leaving the Member States to regulate these aspects.Footnote 110 This means that the States are free to decide both the organisational rules and the composition,Footnote 111 which are critical factors in the performance of the ethical assessment.

There is therefore significant variation among EU countries in national regulation of these committees, although their independenceFootnote 112 and a degree of involvement by laypersons (in particular, patients or patients’ organisations) are common requirements laid down by the Regulation.Footnote 113

3.2.4 Main Inputs in Addressing Ethical and Societal Issues in AI

The above overview of ethical bodies has shown a variety of needs – in many cases not circumscribed to ethics alone but extending to various societal issues – addressed in different ways. There are four main areas in which the experience of existing ethics committees can contribute to framing future committees of experts to assess ethical and societal issues in AI: (i) subject matter, (ii) nature of the decisions, (iii) composition, and (iv) relationship with the beneficiaries of the assessment.

Regarding the subject matter, the work of ethics committees is characterised by an interplay between ethical and legal requirements which is also present in the debate on AI regulation.Footnote 114 For the RECs this is the consequence of a historical expansion from ethics to progressively include regulated fields, such as privacy, safety and dual use. However, in the case of AI, ethical guidelines often show an improper overlap between law and ethics which should be avoided. Legal issues concerning human rights and fundamental freedoms are more properly addressed by the HRIA, while committees should focus on the complementary aspects of different societal issues not covered by human rights regulation and practice.

Another problem with the subject matter of ethics committees concerns the interplay between scientific and ethical assessment. The experience of the ERCEA ethics committees and clinical trials regulation suggest that expert committees should take a holistic approach to the assessment.

In such an approach, the ethical issues are addressed together with the scientific aspects from the earliest stages of project design, as also suggested by the CIOMSFootnote 115 and originally by the EU legislation on clinical trials (Directive 2001/20/EC).

This is not only a response to criticism of the two-stage model, which splits the research process from the ethical assessment,Footnote 116 but also a recognition that societal values need to be embedded in AI solutions from the outset, following a by-design approach. What is more, whereas the values relating to biomedical practices are domain-centred and largely universal, the variety of AI applications and their contextual implementation mean that societal values may differ from one context to another.

As regards the nature of the decisions adopted by ethics committees, an important distinction between CECs, Research Ethics Committees and ethics committees for clinical trials is the mandatory or advisory nature of their decisions. While the function and composition of all these models can provide valuable suggestions for future AI expert committees, the pros and cons of the different nature of their decisions should be weighed in regard to AI and are further discussed in the following section. The same considerations apply to the deliberation process, based on either consensus or majority.

Another aspect of ethical assessment that must be considered with respect to AI applications is the provisional character of the evaluation, given the further development and impact of these applications and their learning capabilities. In this regard, the continuous monitoring provided by the models implemented by the ERCEA and in clinical trials offers a better solution than the prior assessment of hospital RECs.

Regarding the panels’ composition, in all the cases examined a crucial issue concerns the different levels of expertise required and the role of laypersons, including rights-holder and stakeholder participation. There is general agreement on the multidisciplinary and inclusive nature of ethics committees, but the value placed on expertise varies widely, as does the balance between internal and external experts, scientific experts and laypersons, and the engagement of rights-holders and stakeholders.

While the expert background of the committee members is crucial and impacts on the quality of the assessment, a concurrent factor is the importance of training for those members. A learn-by-doing approach is possible, but some general introductory training, even if based on the most critical cases already resolved, could help stimulate cross-disciplinary dialogue.

Finally, with regard to the relationship with the beneficiary of the assessment, the CECs case reveals how, within the organisations setting up the committees, members who are not trained or focused on ethics may underestimate the importance of ethical issues, limiting the extent of their collaboration with ethics bodies. The presence of ethics panels may also encourage people to delegate ethical issues to them, rather than taking an ethical by-design approach to project development, as considered by the ERCEA with regard to some research projects.

3.3 Ad Hoc HRESIA Committees: Role, Nature, and Composition

As discussed in Chap. 1, the extensive and frequently severe impact of data-intensive AI systems on society cannot be fully addressed by the human rights legal framework. Many societal issues, often labelled ethical issues, concern non-legal aspects and involve community choices or individual autonomy requiring a contextual analysis focused on societal and ethical values.

In addition, human rights represent universal values accepted at international level by a large number of countries, but they are necessarily implemented in a range of contexts, and this implies a certain flexibility in the manner in which they are applied.

In the modular HRESIA model therefore, an important component in addressing these issues is the case-specific and contextualised examination which, by its nature, must inevitably be entrusted to expert assessment.

Expert committees can thus play an important role in contextualising the human rights part of the HRESIA and, at the same time, may complete the model regarding the ethical and social values most critical to the given community as well as concerns not covered by the legal framework.

A questionnaire-based approachFootnote 117 cannot fully address the complexity of AI systems and their related social and ethical issues. The case-specific nature of the problems requires a contextualised analysis with a direct involvement of experts, rights-holders and stakeholders. The longstanding focus on ethics in other fields, such as ethical committees in scientific research and medical practice, can offer input for thinking about an active role of dedicated bodies or functions within the development environment where data-intensive AI systems are created and used.

The previous sections explained how ethical assessment, which was originally applied to scientific research, has been recently endorsed by companies focusing on AI, a paradigm shift from research to industry which must be highlighted.

The reason for this shift is not only the greater role that industry, with its privileged access to data and computational power, can play in AI research and development.Footnote 118 The main reason is that AI-based systems are not static, but dynamically updated and also able to learn from the environment in which they operate, a factor which complicates the distinction between the research and industrial phases.

AI products are the fruit of continuous research and experimentation (see, for example, the controversial Cambridge Analytica case). The uncertain side-effects of AI products/services raise ethical questions about the outcome of research and innovation in a field where many AI products can be viewed as a sort of living social experiment.

This change in perspective justifies the interest in ethics in AI development and highlights the chance to extend to industry the safeguards and models established with regard to ethics committees in scientific research.

The experience of ethics committees in other fields suggests that AI expert committees should adopt a holistic approach, looking at both the technical/scientific aspects and the societal issues of AI projects. This would foster a by-design approach to the societal consequences from the start of product development.

Other features are also suggested by this experience: (i) independence; (ii) multidisciplinary and inclusive nature; (iii) the role of training and education, both for committee members and for those dealing with social issues inside the entities developing and using AI solutions; (iv) procedural organisation and transparency of decision-making processes; (v) access to information on the products/services being assessed; (vi) provisional nature of the assessment and lifecycle monitoring of AI applications, including further assessments.Footnote 119

Despite the fact that these common features are shared by corporate AI ethics boards and provide a solid starting point for building future AI expert committees, the ethics committees and the corporate ethics boards discussed above vary widely in structure, including both internal and external bodies. This demonstrates that issues remain to be addressed and that various solutions are possible.

As in other fields, policy guidance on the non-legal aspects of AI can hardly be black and white and is necessarily context-based. This explains why, especially considering the nature of societal and ethical questions, the possible solutions can be combined and shaped in different ways depending on the specific AI application, its societal impact and the nature of the actors, rights-holders and stakeholders involved in its development and deployment.

In defining the key aspects of AI expert committees, several contextual elements must be taken into account, which also distinguish this field from biomedicine: (i) AI applications are far more widespread and easier to develop than medicines; (ii) many AI applications involve forms of automation with no or low societal consequences; (iii) human rights issues are addressed through the HRIA; (iv) various societal issues are primarily related to large-scale projects (e.g. AI-based automated student evaluation systems).

On the other hand, biomedical and research committees, while discussing ethical questions, also have to examine human rights and legal compliance issues as do the corporate AI ethics boards. Similarly, in the HRESIA model, experts play a significant role in both the human rights and the ethical/social assessment. The difference between the two components, however, is much more marked in the HRESIA and this is reflected in the importance of the experts in the assessment.

The human rights impact assessment does not rely chiefly on experts’ understanding of the legal framework, which is largely given. In the evidence-based risk assessment described in Chap. 2, experts contribute to planning and scoping the evaluation and to defining the level of risk depending on the model.

In the ethical and social component of the HRESIA, the experts’ role is much more significant in recognising the community’s values, which are context specific and often require active interaction with rights-holders and stakeholders to understand them fully. Here, experts operate in a less formalised context, compared with the human rights framework, and their assessment does not benefit from a quantifiable risk analysis, such as described in Chap. 2, but is closer to the deliberative processes of ethics committees discussed above.

As regards the social issues considered in the HRESIA of AI, these essentially concern the acceptability and substitution rate of the proposed AI solutions, rather than the traditional labour-related issues addressed by corporate social due diligence.

Acceptability refers to the conformity of an AI application with the societal and, in cases of customised products, individual values. For example, predictive policing systems, while authorised by law and including specific safeguards, may be seen as unacceptable by some communities.

Obviously, in the case of a conflict with human rights the question of social acceptability does not arise. A social acceptability assessment therefore presupposes that the HRIA has already found no detrimental impact on human rights.

The same considerations apply to the substitution rate, which refers to the ability to offer feasible alternatives to AI applications,Footnote 120 where the latter entail possible, present or future, impacts on individuals and society. Substitution does not only concern technical solutions – AI-based versus non-AI embedded systems – but a wider approach to the problems AI is designed to solve. For example, a societal assessment of AI-based video-surveillance crime prevention systems should ask whether resources are best invested in these systems or in social aid measures for crime prevention.

This distinction in the scope of the impact assessment requires committee members with different backgrounds. While the HRIA involves human rights advisors and a human rights officer,Footnote 121 societal issues must be examined by figures with expertise in social science and ethics.

Given that AI applications raise ethical and social concerns in fewer cases than those dealt with by ethics committees in research and clinical trials, it is hard to imagine ethics committees being institutionalised in AI as they are in those sectors. In this early stage of the AI era, the closest example is probably represented by the CECs, which serve to orient decision-making and focus on raising awareness and on rights-holder and stakeholder engagement.

A number of contextual differences must also be considered. For example, where AI solutions are adopted by public bodies in the exercise of their powers (e.g. predictive policing, healthcare assistance, smart mobility, educational ratings, etc.) citizens often have no opt-out option and the AI systems are imposed by government on the basis of political or administrative choices.

In these cases, including those where public bodies exercise their powers in partnership with private companies, the appointment of an ad hoc expert committee, as in the ERCEA model,Footnote 122 could be a mandatory requirement. However, the increasing use of AI applications in a specific sector by a given administration might make the creation of permanent committees for clusters of similar AI applications advisable.Footnote 123

On the other hand, where the private sector provides products and services, AI expert committees might be created on a voluntary basis to better align AI development and deployment with the societal needs and context.

In both cases the independence of the committees is key and will impact on the member selection criteria. This concerns not only consolidated practice on conflicts of interest, but also the balance between internal and external experts, where a predominance of internal members may, directly or indirectly, result in the appointing body’s internal values and interests being overvalued at the expense of competing societal interests and values.Footnote 124

On the other hand, as in the case of CECs,Footnote 125 AI developers may be less inclined to seek the advice of experts who have no recognised authority within the institution and who are more likely to be out of touch with them. What is more, an integrated committee within the organisation can better monitor effective implementation of its advice and interact actively with developers.

It might therefore be more helpful to envisage an internal member of the expert committee or an internal advisor on societal issues as a trait d’union between the committee and the AI developers. In certain cases, depending on the nature of the public or private body’s activity, where the societal issues raised by AI are a core and frequent concern, a binary model could be adopted, with an expert acting as advisor on societal issues plus a committee of experts.

The advisor becomes a permanent contact for day-to-day project development, submitting the most critical issues to the committee, where the plurality of multidisciplinary views gives a greater assurance than the necessarily limited view of a single advisor. Finally, as in the model adopted by some CECs, participatory deliberation processesFootnote 126 could be implemented to facilitate a deeper understanding by developers and AI designers of the ethical and societal issues their work raises.Footnote 127

The advisor’s background here will clearly impact the entire process. For this reason and given the spectrum of issues associated with AI, rather than a background in ethics alone, the advisor should have a varied social science profile including the skills to recognise the overall impact of the proposals on society in general.

Looking at other specialist figures (e.g., the DPO in data protection law), the advisors may be either internal or external,Footnote 128 but in order to have an effective impact on critical issues, decisions and practices, they must be truly independent, including with regard to financial resources, and report directly to top management.

Regarding the deliberation methods and the mandatory or consultative nature of the AI committee’s decisions, it is hard to draw red lines. Consensus-based deliberations undoubtedly make for more inclusive decision-making in multidisciplinary panels, but may require more time and resources. Equally, mandatory decisions will impact AI manufacturers and developers more acutely, but risk weakening their engagement and accountability in the AI development and deployment process.Footnote 129 There is no one-size-fits-all solution, then, but different contexts probably require different approaches.

AI committees – like RECsFootnote 130 – should play a supportive, collaborative and educational role. Alongside their main task of assessing societal impacts, they should contribute to education and policy formation within the appointing bodies.Footnote 131

Finally, a crucial aspect concerns the role of the AI expert committee in civic participation and stakeholder engagement.Footnote 132 Participatory issues can be addressed either inside or outside the committees, by including laypersons among their members – representing rights-holders, civil society and stakeholders – or by furthering interaction between the experts, rights-holders, civil society and stakeholders through interviews, focus groups or other participation tools.Footnote 133

The experience of the ethics committees highlights the value of giving laypersons and stakeholders – and in certain cases rights-holders – a voice directly within the committees. Nevertheless, the broader HRESIA model suggests a different approach, combining human rights experts (for the HRIA) and social science experts (for the ethical and societal assessment) in the committee, while adding specific tools for rights-holder and stakeholder participation. It is worth remembering that participation can be valuable in assessing both human rights and societal impacts.

Meanwhile, the modular scheme, which keeps the three areas distinct, makes it possible to combine them according to the needs of the specific context. The HRIA remains an obligatory step in the development and use of AI, but not all AI applications necessarily entail ethical and societal issues. The level of participation can also vary significantly depending on the type of impact and the categories or population involved.

In addition, AI applications may impact a variety of interests and rights-holders/stakeholders, which in many cases are dispersed and not organised in civil society organisations.Footnote 134 In these cases, the HRESIA serves to identify these interests and the potentially affected clusters of people, who can only be involved following an initial assessment and cannot therefore be part of the expert committee from the outset.

Of course, this does not mean that where homogeneous impacted categories are evident from the earliest stages of an AI proposal (e.g. students and AI-based university admission tools) they cannot be given a voice or included in the assessment teams. Even here, however, given the complexity and variety of interests impacted by AI, participatory tools remain an important component of HRESIA implementation in identifying additional stakeholders and ensuring a wider rights-holder engagement. Direct participation differs from the engagement of spokespersons of selected stakeholders, who often fail to represent the majority of the categories or groups of people involved.

Participation tools are therefore vital to an effective democratic decision-making process on AI,Footnote 135 an inclusive approach that ensures a voice is given to minorities and to underrepresented and vulnerable categories.

Finally, in the human rights, ethical and societal assessment, experts should work actively towards a degree of disclosure about the process and its outcome to facilitate this participation. At the same time, interaction with rights-holders and stakeholders should be properly documented to guarantee accountability around their effective engagement.

3.4 Rights-Holder Participation and Stakeholder Engagement

As explained above, rights-holder participation and stakeholder engagement are crucial to HRIA and societal and ethical assessments. Regarding human rights, participation can provide a better understanding of potentially affected rights, including by disaggregating the HRIA to focus on specific impacted categories,Footnote 136 and a way of taking into account the vernacularisation of human rights.Footnote 137 Moreover, where AI systems are used in decision-making processes, participation can also be seen as a significant human right in itself, namely the right to participate in public affairs.Footnote 138

As for societal and ethical assessments, given the contextual nature of the values in question, participation plays a crucial role in understanding the impact of AI systems, as a complement to the knowledge of the HRESIA experts. Here, participation is also important with regard to the specific issue of the substitution of AI-based solutions with alternative responses to the problems AI purports to address (substitution rate).Footnote 139

In the AI solution design process, participation can contribute at the initial stage of product/service design (the discovery stageFootnote 140), during project development, or in its concrete implementation, including further post-market changes.

During the first stage, which defines only the overall goal of the product/service, rights-holders and stakeholders should be engaged in discussing the general problem and the potential substitution rate, if any.Footnote 141 Social science has suggested different forms of participation to achieve this goal,Footnote 142 to be implemented in different contexts according to need.

While it is outside the scope of this legal analysis to describe and discuss these methodologies and the various results achievable,Footnote 143 it is evident that a comprehensive future regulation of AI should consider rights-holder and stakeholder engagement as crucial. This leads to two main regulatory consequences. First, the participation phase must be present in the assessment of AI projects, at least in high-impact cases. Second, as participation methods require specific expertise, the HRIA and societal assessment experts should be supported by social scientists in designing participation.

Though crucial from the start of the projects, voluntary participation in detecting the key factors of the potential impacts of AI systemsFootnote 144 necessarily requires a preliminary desk analysis by human rights and social science experts to identify possible impacted interests and better target any participatory initiatives. Here, underestimation or overestimation of a specific interest may affect the outcome of the entire AI project, as in the case of Toronto’s Sidewalk.Footnote 145

Having defined the targets, potential participants must be properly informed about the goals and structure of the project. Many data-intensive AI projects involve detailed technical knowledge and it is important to facilitate understanding of these aspects by providing easily accessible, general and neutral information about the technologies and their workings.

As disclosure about project design and content, especially in large-scale projects, may entail competition issues or, more generally, competing third party interests, limited disclosure or confidentiality agreements should be considered.

Meanwhile, participants may disclose personal or relational information during interviews or other participatory activities. In these cases, those performing the HRESIA should be subject to confidentiality obligations.

Finally, as far as is consistent with the nature and purpose of participation, participants should receive feedback about their impact on product/service design. This is particularly important where there are clearly identified and homogeneous categories of potentially affected individuals (e.g. consumers, students, etc.).

Following these guidelines, an effective and properly designed participation strategy can achieve two main results: reducing assessment bias and increasing trust in AI services and products, which are often opaque and consist of closed, top-down solutions.

In terms of bias reduction, participation helps experts to think outside the box, considering new issues or examining those already identified from a different angle not necessarily reflecting their interpretation of societal values and constructs.Footnote 146 On the other hand, where the experts’ views are confirmed by participatory evidence, the groundwork may offer a better understanding of the problems and the societal dimension of AI applications.

Trust, a key issue in the adoption of AI systems by individuals and communities,Footnote 147 is a complex and longstanding notion in technology development regarding the relationship between human artefacts, those who build them, and users,Footnote 148 and comprises the capability of a given technology, often influenced by emotional and other non-rational factors.Footnote 149

The HRESIA assessment model can give users reasons to trust AI, in the knowledge that the potential negative consequences have been properly considered and addressed. The active engagement of stakeholders, rights-holders and users in the design process and in the assessment can have a positive effect on the relational dimension of trust.

Participation can also evolve into a more complex relationship between AI developers and end-users, opening up to co-design approaches. Given the importance of technology in actively shaping society,Footnote 150 the public should not play a passive role delegating all the design decisions to manufacturers/service providers, even where value-oriented methods are guaranteed.Footnote 151

On the other hand, the added value of participation should not tempt us to underestimate the risks in this process. In the first place, it is crucial to combat misuse of participation in the form of “participation washing”, which guides the target population towards expected outcomes.Footnote 152 Independent human rights and social science experts should serve as a barrier to manipulation, as should the fact that the committees themselves, and not the AI manufacturers, are responsible for the assessments.

Another critical issue concerns the voluntary nature of participation. Potential biases in the social composition of participants in favour of wealthy and educated people, polarisation due to the greater presence of highly motivated people representing minority clusters, the risk of exclusion due to the use of technology-based tools (e.g. online participation platforms), as well as cases of participants covertly acting on behalf of certain stakeholders to reinforce their position while presenting it as widely held, are challenges common to all volunteer-based approaches.

The issues raised by AI systems do not alter either these risks or the solutions, such as deliberative polling and participant selection, affirmative action and incentives for low-status and low-income citizens, and other strategies already commonly used in participation practice.

Similarly, past experience in participation may suggest limiting citizen engagement in AI design to what is strictly necessary, avoiding too many meetings, which tend to reduce interest and the level of participation.Footnote 153 We must also remember that broader participation entails additional costs for those who build and use AI systems, so a balance must be struck between the potential risks and the effort required.

The modular structure of the HRESIA can help in this regard as the level of participation required can vary significantly depending on the type of AI application and the categories or population impacted. Participatory tools can be simplified in some cases by reducing them, for instance, to rights-holder and stakeholder interviews or open consultations.

3.5 Summary

AI systems pose questions that go beyond their impact on human rights and freedoms and regard their social acceptability and coherence with the values of the community in which they are to be used. Nevertheless, this broader consideration of the consequences of AI should not create an improper overlap between legal and ethical/social values.

The social and ethical consequences of AI represent a complementary dimension alongside that of human rights that must be properly investigated to mitigate adverse effects for individuals and society. The HRESIA therefore includes a module focused on the ethical and social impact assessment, to capture the holistic dimension of the relationship between humans and machines.

This complementarity also concerns the interests examined, with the HRIA preceding the ethical and social assessment as a preliminary step, given the binding nature of human rights. Only after the proposed solution has been found to be compliant with the human rights principles are the ethical and social consequences investigated.

The societal assessment is more complicated than that of human rights. Whereas the latter refers to a well-defined benchmark – even considering contextual implementation and vernacularisation – the ethical and social framework involves a variety of theoretical inputs on the underlying values, as well as a proliferation of guidelines, in some cases partially affected by ‘ethics washing’ or reflecting corporate values.

This requires a contextualised and, as far as possible, a participative analysis of the values of the community in which the AI solutions are expected to be implemented. Here the experts play a crucial role in detecting, contextualising and evaluating the AI solutions against existing ethical and social values.

Much more than in the human rights assessment, experts are therefore decisive in grasping the relevant community values, given their context specific nature and, in many cases, the need for active interaction with rights-holders and stakeholders to better understand them.

Experts can be involved in AI assessment in a variety of ways, as demonstrated recently by the ethics boards in digital economy companies. The structure, composition and internal organisation of the expert committees are not neutral elements, but can influence the outcome of the assessment in terms of quality and reliability of the results, and the independent nature of the evaluation.

This explains how ethics committees in scientific research, bioethics and clinical trials can provide inputs for future AI expert committees within the HRESIA model. While certain key elements can be identified (e.g. independence, multidisciplinarity and inclusiveness of the committee; transparency of internal procedures and decision-making processes; provisional character of their decisions), the committees present a variety of structures and types of organisation in terms of member qualifications, rights-holder, stakeholder, and layperson participation, and internal or external experts.Footnote 154 This demonstrates not only the presence of open issues that remain to be addressed, but also that there is no one-size-fits-all solution: the differing nature and contextual importance of ethical and societal interests may require different approaches to the role of experts.

One solution in organisations focused on AI and its use could be the figure of an internal advisor on societal issues as a permanent contact for day-to-day project development and a trait d’union with the HRESIA experts. This would also help to foster internal participatory deliberation through interaction with the AI developers.

Finally, experts tasked with performing an ethical and social impact assessment operate in a less formalised context than the human rights framework. They cannot benefit from the quantifiable risk analysis described in Chap. 2, but mainly rely on an exchange of opinions within a deliberative process similar to that discussed for ethics committees.

Just as with the HRIA, ethical and societal assessments also have an influence on the design of AI solutions, especially with regard to acceptability and the substitution rate of the proposed AI solution. They not only examine the AI product/service itself, but look at a broader range of alternative possibilities to address the needs identified, not necessarily AI-based.

Based on the experience of the ethics committees, the AI assessment cannot be entrusted entirely to experts and their interaction with stakeholders. It should also include a participatory dimension, which is essential to an effective democratic decision-making process concerning AI. An inclusive approach can also contribute to a better understanding of the societal and ethical issues, as well as the context-specific human rights concerns. Furthermore, the modular HRESIA structure makes it possible to vary the level and focus of participation depending on the area under assessment.