
2.1 Introduction

The debate on data and Artificial Intelligence (AI) that has characterised the last few years has been marked by an emphasis on the ethical dimension of data use (data ethics)Footnote 1 and by a focus on potential bias and the risk of discrimination.Footnote 2

While the regulation of data processing has for decades been grounded in law, including the interplay between data use and human rights, the debate on data-intensive AI systems has rapidly changed trajectory, from law to ethics.Footnote 3 This is evident not only in the literature,Footnote 4 but also in the political and institutional discourse.Footnote 5 In this regard, an important turning point was the European Data Protection Supervisor (EDPS) initiative on digital ethics,Footnote 6 which led to the creation of the Ethics Advisory Group.Footnote 7

Two distinct and chronologically consecutive stages of the debate on data ethics are worth considering: the academic debate and the institutional initiatives. These contributions differ in nature and have given voice to different underlying interests.

The academic debate on the ethics of machines is part of the broader and older reflection on ethics and technology. It is rooted in well-established theoretical models, mainly in the philosophical domain, and has reached methodological maturity. In contrast, the institutional initiatives are more recent, have a non-academic nature and aim at moving the regulatory debate forward by bringing ethics into the sphere of data protection. The main reason for this emphasis on ethics in recent years has been the growing concern in society about the use of data and new data-intensive applications, from Big DataFootnote 8 to AI.

Although similar paths are known in other fields, the shift from theoretical analysis to the political arena represents a major change. Political attention to these issues has necessarily reduced the depth of analysis, ethics being treated as an issue to be flagged rather than as the basis for a full-blown strategy of ethically oriented solutions. In a nutshell, the message of regulatory bodies to the technology environment was this: law is no longer enough, you should also consider ethics.

This remarkable step forward in considering the challenges of new paradigms had the implicit limitation of a more general and basic ethical framework compared to the academic debate. In some cases, only general references to the need to consider ethical issues have been added to AI strategy documents, leaving the task of further investigation to the recipients of these documents. At other times, as in the case of the EDPS, a more ambitious goal of providing ethical guidance was pursued.

Methodologically, the latter goal has often been achieved by delegating the definition of guidelines to committees of experts, including some forms of wider consultation. As in the tradition of expert committees, a key element of this process is the selection of experts.

These committees were not only composed of ethicists or legal scholars but had a different or broader composition defined by the appointing bodies.Footnote 9 Their heterogeneous nature made them more similar to multi-stakeholder groups.

Another important element of these groups advising policymakers concerns their internal procedures: the actual amount of time given to their members to deliberate, the internal distribution of assigned tasks (in larger groups this might involve several sub-committees with segmentation of the analysis and interaction between sub-groups), and the selection of the rapporteurs. These are all elements that have an influence in framing the discussion and its results.

All these considerations clearly show the differences between the initial academic debate on ethics and the same debate as framed in the context of institutional initiatives. Moreover, this difference concerns not only structure and procedures, but also outcomes. The documents produced by the experts appointed by policymakers are often minimalist in terms of theoretical framework and focus mainly on the policy message concerning the relevance of the ethical dimension.

The variety of ethical approaches and the lack of clear indications of the frame of reference, or of the reasons for preferring a certain ethical framework, make it difficult to understand the key choices behind the proposed ethical guidelines.Footnote 10 Moreover, the local perspective of the authors of these documents, in line with the context-dependent nature of ethical values, undermines the ambition to provide global standards or, where certain values are claimed to have general relevance, may betray a risk of ethical colonialism.

These shortcomings that characterise a purely ethical discourse on AI regulation – which are analysed in more detail in Chap. 3 – lead us to turn our gaze towards more well-established and commonly accepted frameworks such as that provided by human rights, the implementation of which in the field of AI is discussed in the following sections.

2.2 A Legal Approach to AI-Related Risks

In considering the impact of AI on human rights, the dominant approach in many documents is mainly centred on listing the rights and freedoms potentially impactedFootnote 11 rather than operationalising this potential impact and proposing assessment models.

However, case-specific assessment is more effective in terms of risk prevention and mitigation than risk presumptions based on an abstract classification of high-risk sectors or high-risk uses/purposes. Sectors, uses and purposes are very broad categories which include different kinds of applications – some of them continuously evolving – with a variety of potential impacts on rights and freedoms that cannot be clustered ex ante on the basis of risk thresholds, but require a case-by-case impact assessment.Footnote 12

Similarly, the adoption of a centralised technology assessment carried out by national ad hoc supervisory authoritiesFootnote 13 can provide useful guidelines for technology development and can be used to fix red linesFootnote 14 but must necessarily be complemented by a case-specific assessment of the impact of each application developed.

For these reasons, a case-specific impact assessment remains the main tool to ensure accountability and the safeguarding of individual and collective rights and freedoms. In this regard, a solution to the problem could easily be drawn from the human rights impact assessment models already adopted in several fields.

However, these models are usually designed for different contexts than those of AI applications.Footnote 15 The latter are not necessarily large-scale projects involving entire regions with multiple social impacts. Although there are important data-intensive projects in the field of smart cities, regional services (e.g. smart mobility) or global services (e.g. online content moderation provided by big players in social media), the AI operating context for the coming years will be more fragmented and distributed in nature, given the business environment in many countries, often dominated by SMEs, and the variety of communities interested in setting up AI-based projects. The growing number of data scientists and the decreasing cost of hardware and software solutions, as well as their delivery as a service, will facilitate this scenario characterised by many projects with a limited scale, but involving thousands of people in data-intensive experiments.

For such projects, the traditional HRIA models are too elaborate and oversized, which is why it is important to provide a more tailored model of impact assessment, while avoiding mere theoretical abstractions based on generic decontextualised notions of human rights.

Against this background, it is worth briefly considering the role played by impact assessment tools with respect to the precautionary principle as an alternative way of dealing with the consequences of AI.

As in the case of other potential technology-related risks, there are two different legal approaches to the challenges of AI: the precautionary approach and risk assessment. These approaches are alternative, but not incompatible. Indeed, complex technologies with a plurality of different impacts might be better addressed through a mix of these two remedies.Footnote 16

As risk theory states, their alternative nature is related to the notion of uncertainty.Footnote 17 Where a new application of technology might produce potential serious risks for individuals and society, which cannot be accurately calculated or quantified in advance, a precautionary approach should be taken.Footnote 18 In this case, the uncertainty associated with applications of a given technology makes it impossible to conduct a concrete risk assessment, which requires specific knowledge of the extent of the negative consequences, albeit in specific classes of risks.Footnote 19

Where the potential consequences of AI cannot be fully envisaged, as in the case of the ongoing debate on facial recognition and its applications, a proper impact assessment is impossible, but the potentially high impact on society justifies specific precautionary measures (e.g., a ban or restriction on the use of AI-based facial recognition technologies).Footnote 20 This does not mean limiting innovation, but investigating more closely its potentially adverse consequences and guiding the innovation process and research,Footnote 21 including the mitigation measures (e.g. containment strategies, licensing, standards, labelling, liability rules, and compensation schemes).

On the other hand, where the level of uncertainty is not so high, the risk-assessment process is a valuable tool in tackling the risks stemming from technology applications. According to the general theory on the risk-based approach, the process consists of four separate stages: (1) identification of risks, (2) analysis of the potential impact of these risks, (3) selection and adoption of the measures to prevent or mitigate the risks, (4) periodic review of the effectiveness of these measures.Footnote 22 Furthermore, to enable subsequent monitoring of the effective level of compliance, duty bearers should document both the risk assessment and the measures adopted.
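Purely by way of illustration, the four stages and the accompanying documentation duty can be pictured as a simple risk register. The following Python sketch is hypothetical – class names, fields and comments are invented for this example rather than drawn from any specific regulatory text – and merely shows how identified risks, the measures adopted and the periodic reviews might be documented in an auditable form.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class IdentifiedRisk:
    """Stages 1-2: an identified risk and the analysis of its potential impact."""
    description: str           # e.g. a hypothetical "unlawful profiling of applicants"
    affected_right: str        # right or freedom concerned
    impact_analysis: str = ""  # stage 2: analysis of the potential impact

@dataclass
class RiskRecord:
    """Stages 3-4: documents the measures adopted and their periodic review."""
    risk: IdentifiedRisk
    measures: List[str] = field(default_factory=list)  # prevention/mitigation measures
    reviews: List[date] = field(default_factory=list)  # dates of effectiveness reviews

    def add_review(self, when: date) -> None:
        """Stage 4: log a periodic review of the measures' effectiveness."""
        self.reviews.append(when)
```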

Since neither the precautionary principle nor risk assessment is an empty formula, but each focuses on specific rights and freedoms to be safeguarded, they can be seen as two tools for developing a human rights-centred technology. While the uncertainty of some technology solutions will lead to the application of the precautionary principle, a better awareness and management of the related risks will enable a proper risk assessment.

However, the relationship between risk assessment and the precautionary principle is rather complicated and cannot be reduced to a strict alternative. Indeed, when a precautionary approach suggests that a technology should not be used in a certain social context, this does not necessarily entail halting its development. On the contrary, where there is no incompatibility with human rightsFootnote 23 the technology can be developed further to reach a sufficient level of maturity that shows awareness of the related risks and the effective solutions.

This means that, in these cases, human rights can play an additional role in guiding development so that, once the technology reaches a level of awareness of its potential consequences that excludes uncertainty, it becomes subject to risk assessment.

Under this reasoning, two different scenarios are possible: one in which the precautionary principle becomes an outright ban on a specific use of technology, and another in which it restricts the adoption of certain technologies but not their further development. In the latter case, a precautionary approach and a risk assessment are two different phases of the same approach rather than alternative responses.

2.3 Human Rights Impact Assessment of AI in the HRESIA Model

Having defined the importance of a human rights-oriented approach in AI design and use, and the role that impact assessment procedures can play in this respect,Footnote 24 it is worth noting that traditional Human Rights Impact Assessment (HRIA) models are often territory-based, considering the impact of business activities in a given local area and community, whereas in the case of AI applications this link with a territorial context may be less significant.

There are two different scenarios: cases characterised by the use of AI in territorial contexts with a high impact on social dynamics (e.g. smart city plans, regional smart mobility plans, predictive crime programmes) and those where AI solutions have a more limited impact as they are embedded in globally distributed products/services (e.g. AI virtual assistants, autonomous cars, AI-based recruiting software, etc.) and do not focus on a given socio-territorial community. While in the first case the context is very close to the traditional HRIA cases, where large-scale projects affect whole communities and the potential impacts cover a wide range of human rights, the second case is characterised by a more limited social impact, often focusing more on individuals rather than on society at large.Footnote 25 This difference has a direct effect on the structure and complexity of the model, as well as the tools employed.

Criteria such as the AAAQ framework,Footnote 26 for example, or issues concerning property and lands, can be used in assessing a smart city plan, but are unnecessary or disproportionate in the case of AI-based recruitment software. Similarly, a large-scale mobility plan may require significant monitoring of needs through interviews with rightsholders and stakeholders, while in the case of an AI-based personal IoT device this phase can be much reduced.

In both these scenarios, the two most relevant novelties introduced by the HRESIA with regard to its HRIA module concern the ex ante nature of the assessment carried out and the greater focus on quantifiable risk thresholds.

Regarding the former, the ex ante approach is required by the guiding role that HRESIA aims to play in project design and development, as opposed to the ex post evaluation centred on corrective policies that often characterises traditional HRIA.Footnote 27 Moreover, here, the pervasive and varied nature of data-intensive AI systems and their components leads to a reflection on the challenges that large-scale AI poses with respect to multi-factor scenarios.Footnote 28

Concerning the focus on risk thresholds, this is in line with the requirements emerging in the regulatory debate on AI,Footnote 29 where the definition of different risk levels is crucial to the acceptability of AI products/services and has a direct impact on the obligations of AI manufacturers, providers and users. A quantitative dimension of assessment, in terms of ranges of risk, is therefore needed both for AI design guidance and for legal compliance.

Notwithstanding these important differences influencing the assessment methodology, the main building blocks of the model described here – planning and scoping, data collection (including rightsholder and stakeholder consultation) and analysis – remain the same as those used in HRIA and are examined in detail in the following sub-sections.

2.3.1 Planning and Scoping

The first stage deals with the definition of the HRIA target, identifying the main features of the product/service and the context in which it will be placed, in line with the context-dependent nature of the HRIA. There are three main areas to consider at this stage:

  • description and analysis of the type of product/service, including data flows and data processing purposes

  • the human rights context (contextualisation on the basis of local jurisprudence and laws)

  • identification of rightsholders and stakeholders.

Table 2.1 provides a non-exhaustive list of potential questions for HRIA planning and scoping.Footnote 30 The extent and content of these questions will depend on the specific nature of the product/service and the scale and complexity of its development and deployment.Footnote 31 This list is therefore likely to be further supplemented with project-specific questions.Footnote 32

Table 2.1 Planning and scoping.

2.3.2 Data Collection and the Risk Analysis Methodology

While the first stage is mainly desk research, the second focuses on gathering relevant empirical evidence to assess the product/service’s impact on human rights and freedoms. In traditional HRIA this usually involves extensive fieldwork. In the case of AI applications, however, this kind of extensive data collection and analysis is confined to large-scale projects such as those developed in the context of smart cities, where different services are developed and integrated. In the remaining cases, given the limited and targeted nature of each application, data collection largely concerns the product/service’s features and feedback from stakeholders.

Based on the information gathered in the previous stage (description and analysis of the type of product/service, human rights context, controls in place, and stakeholder engagement), we can proceed to a contextual assessment of the impact of AI use on human rights, to understand which rights and freedoms may be affected, how this may occur, and which potential mitigation measures may be taken.

Since in most cases the assessment is not based on measurable variables, the impact on rights and freedoms is necessarily the result of expert evaluation,Footnote 34 where expert opinion relies on knowledge of case law, the literature, and the legal framework. This means that it is not possible to provide precise measurement of the expected impacts but only an assessment in terms of range of risk (i.e. low, medium, high, or very high).

The benchmark for this assessment is therefore the jurisprudence of the courts and independent bodies (e.g. data protection authorities, equality bodies) that deal with human rights in their decisions. Different rights and freedoms may be relevant depending on the specific nature of the given application.

Examination of any potentially adverse impact should begin with a general overview followed by a more granular analysis where the impact is envisaged.Footnote 35 In line with normal risk assessment procedures, three key factors must be considered: risk identification, likelihood (L), and severity (S). As regards the first, the focus on human rights and freedoms already defines the potentially affected categories and the case specific analysis identifies those concretely affected, depending on the technologies used and their purposes. Since this is a rights-based model, risk concerns the prejudice to rights and freedoms, in terms of unlawful limitations and restrictions, regardless of material damage.

The expected impact of the identified risks is assessed by considering both the likelihood and the severity of the expected consequences, using a four-step scale (low, medium, high, very high) to avoid any risk of average positioning.

Likelihood is the combination of two elements: the probability of adverse consequences and the exposure. The former concerns the probability that adverse consequences of a certain risk might occur (Table 2.2) and the latter the potential number of people at risk (Table 2.3). In considering the potential impact on human rights, it is important not only to consider the probability of the impact, but also its extension in terms of potentially affected people.

Table 2.2 Probability
Table 2.3 Exposure

Both these variables must be assessed on a contextual basis, considering the nature and features of the product and service, the application scenario, previous similar cases and applications, and any measures taken to prevent adverse consequences. Here, the engagement of relevant stakeholders can help to better understand and contextualise these aspects, alongside the expertise of those carrying out the impact assessment.

These two variables are combined in the combinatorial Table 2.4 using a cardinal scale to estimate the overall likelihood level (L). This table can be further modified on the basis of the context-specific nature of assessed AI systems and feedback received from experts, rightsholders and stakeholders.

Table 2.4 Likelihood table (L)

The severity of the expected consequences (S) is estimated by considering the nature of potential prejudice in the exercise of rights and freedoms and their consequences. This is done by taking into account the gravity of the prejudice (gravity), and the effort to overcome it and to reverse adverse effects (effort) (Tables 2.5 and 2.6).

Table 2.5 Gravity of the prejudice
Table 2.6 Effort to overcome the prejudice and to reverse adverse effects

As in the case of likelihood, these two variables are combined in a table (Table 2.7) using a cardinal scale to estimate the severity level (S).

Table 2.7 Severity table (S)

Table 2.8, used for the overall assessment, charts both variables – likelihood (L) and severity (S) of the expected consequences – against each envisaged risk to rights and freedoms (R1, R2, … Rn).

Table 2.8 Table of envisaged risks

The overall impact for each examined risk, taking into consideration the L and S values, is determined using a further table (Table 2.9). The colours represent the overall impact, which is very high in the dark grey sector, high in the grey sector, medium in the lighter grey sector and is low in the light grey sector.

Table 2.9 Overall risk impact table
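Since the cells of Tables 2.2–2.9 are not reproduced in this chapter, the combinatorial logic can only be sketched with placeholder values. The Python fragment below assumes an illustrative four-step scale and a single invented combination matrix; in the actual model each of Tables 2.4, 2.7 and 2.9 has its own cells, so this is a schematic of the combination step rather than a transcription of the published tables.

```python
# Illustrative four-step scale used throughout the model.
LEVELS = ["low", "medium", "high", "very high"]

# Placeholder symmetric combination matrix (rows: first variable, columns: second).
# The cells of the published Tables 2.4, 2.7 and 2.9 are not reproduced here,
# so the values below are assumptions for illustration only.
COMBINATION = [
    # low          medium       high         very high
    ["low",        "low",       "medium",    "medium"],      # low
    ["low",        "medium",    "medium",    "high"],        # medium
    ["medium",     "medium",    "high",      "very high"],   # high
    ["medium",     "high",      "very high", "very high"],   # very high
]

def combine(a: str, b: str) -> str:
    """Combine two four-step levels via the placeholder matrix."""
    return COMBINATION[LEVELS.index(a)][LEVELS.index(b)]

def likelihood(probability: str, exposure: str) -> str:
    """Probability x exposure -> likelihood (cf. Table 2.4, illustrative)."""
    return combine(probability, exposure)

def severity(gravity: str, effort: str) -> str:
    """Gravity x effort -> severity (cf. Table 2.7, illustrative)."""
    return combine(gravity, effort)

def overall_impact(l: str, s: str) -> str:
    """Likelihood x severity -> overall impact (cf. Table 2.9, illustrative)."""
    return combine(l, s)
```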

Once the potentially adverse impact has been assessed for each of the rights and freedoms considered, a radial graph is charted to represent the overall impact on them. This graph is then used to decide the priority of intervention in altering the characteristics of the product/service to reduce the expected adverse impacts. See Fig. 2.1.Footnote 36

Fig. 2.1 Radial graph (impact) example (Source: the author)
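A radial (spider) chart of this kind can be produced with standard plotting tools. The sketch below uses invented impact values and right labels purely for illustration and shows one possible way of charting per-right impact levels on the four-step scale.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical impact levels (1 = low ... 4 = very high), for illustration only.
rights = ["Privacy & data protection", "Freedom of thought",
          "Psych./physical safety", "Non-discrimination"]
impact = [3, 2, 2, 1]

angles = np.linspace(0, 2 * np.pi, len(rights), endpoint=False).tolist()
values = impact + impact[:1]        # close the polygon
angles = angles + angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
ax.plot(angles, values, linewidth=1.5)
ax.fill(angles, values, alpha=0.25)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(rights)
ax.set_yticks([1, 2, 3, 4])
ax.set_yticklabels(["low", "medium", "high", "very high"])
ax.set_title("Overall impact on rights and freedoms (illustrative)")
plt.show()
```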

To reduce the envisaged impacts, factors that can exclude the risk from a legal perspective (EFs) – such as the mandatory nature of certain impacting features or the prevalence of competing interests recognised by law – and those that can reduce the risk by means of appropriate mitigation measures (MMs) should be considered.

After the first adoption of the appropriate measures to mitigate the risk, further rounds of assessment can be conducted according to the level of residual risk and its acceptability, enriching the initial table with new columns (Table 2.10).

Table 2.10 Comparative risk impact analysis table (before/after mitigation measures and excluding factors)

The first two new columns show any risk excluding factors (EFs) and mitigation measures (MMs), while the following two columns show the residual likelihood (rL) and severity (rS) of the expected consequences, after accounting for excluding and mitigation factors. The last column gives the final overall impact, using rL and rS values and the overall impact table (Table 2.9); this result can also be represented in a new radial graph. Note that it is also possible to estimate the total overall impact, as an average of the impacts on all the areas analysed. But this necessarily treats all the different impacted areas (i.e. rights and freedoms) as having the same importance and is therefore a somewhat imprecise synthesis.Footnote 37
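The enriched table can likewise be mirrored in a simple record per envisaged risk. The sketch below is again illustrative – field names are invented, and the reassess helper assumes that the placeholder overall_impact function from the previous sketch is in scope – and is intended only to convey the structure of Table 2.10.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class RiskRow:
    """One row of the comparative risk table, mirroring Table 2.10 (illustrative)."""
    risk: str                                                      # envisaged risk (R1 ... Rn)
    likelihood: str                                                # initial L
    severity: str                                                  # initial S
    impact: str                                                    # initial overall impact
    excluding_factors: List[str] = field(default_factory=list)    # EFs
    mitigation_measures: List[str] = field(default_factory=list)  # MMs
    residual_likelihood: Optional[str] = None                      # rL, after EFs/MMs
    residual_severity: Optional[str] = None                        # rS, after EFs/MMs
    residual_impact: Optional[str] = None                          # final overall impact

    def reassess(self) -> None:
        """Fill the final column from rL and rS, reusing the placeholder
        overall_impact() sketched above (illustrative rule, not Table 2.9)."""
        if self.residual_likelihood and self.residual_severity:
            self.residual_impact = overall_impact(self.residual_likelihood,
                                                  self.residual_severity)
```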

In terms of actual effects on operations, the radial graph is therefore the best tool to represent the outcome of the HRIA, showing graphically the changes after introducing mitigation measures. However, an estimation of overall impact could also be made in future since several legislative proposals on AI refer to an overall impact of each AI-based solution,Footnote 38 using a single risk scale covering all potential consequences.

2.4 The Implementation of the Model

The next two sub-sections examine two possible applications of the proposed model, with two different scales of data use. The first case, an Internet-connected doll equipped with AI, shows how the impact of AI is not limited to adverse effects in terms of discrimination, but has a wider range of consequences (privacy and data protection, education, freedom of thought and diversity, etc.), given the innovative nature of the application and its interaction with humans.

This highlights the way in which AI does not merely concern data and data quality but, more broadly, the transformation of human-machine interaction by data-intensive systems. This is even more evident in the case of smart cities, where the interaction is replicated on a large scale, affecting a whole variety of behaviours by individuals, groups and communities.

The first case study (an AI-powered doll) shows in detail how the HRIA methodology can be applied in a real-life scenario. In the second case (a smart city project) we do not repeat the exercise for all the various data-intensive components, because a full HRIA would require extensive information collection, rightsholder and stakeholder engagement, and supply-chain analysis,Footnote 39 which go beyond the scope of this chapter.Footnote 40 But above all, the purpose of this second case study is different: to shed light on the dynamics of the HRIA in multi-factor scenarios where many different AI systems are combined.

Indeed, a smart city environment is not a single device, but encompasses a variety of technical solutions based on data and algorithms. The cumulative effect of integrating many layers results in a whole system that is greater and more complicated than the sum of its parts.

This explains why the assessment of potential risks to human rights and freedoms cannot be limited to a fragmented case-by-case analysis of each application. Rather, it requires an integrated approach that looks at the whole system and the interaction among its various components, which may have a wider impact than each component taken separately.

Scale and complexity, plus the dominant role of one or a few actors, can produce a cumulative effect which may entail multiple and increased impacts on rights and freedoms, requiring an additional integrated HRIA to give an overall assessment of the large-scale project and its impacts.

2.4.1 A Case Study on Consumer Devices Equipped with AI

Hello Barbie was an interactive doll produced by Mattel for the English-speaking market, equipped with speech recognition systems and AI-based learning features, operating as an IoT device. The doll was able to interact with users but did not interact with other IoT devices.Footnote 41

The design goal was to provide a two-way conversation between the doll and the children playing with it, including capabilities that make the doll able to learn from this interaction, e.g. tailoring responses to the child’s play history and remembering past conversations to suggest new games and topics.Footnote 42 The doll is no longer marketed by Mattel due to several concerns about system and device security.Footnote 43

This section discusses the hypothetical case, imagining how the proposed assessment modelFootnote 44 could have been used by manufacturers and developers and the results that might have been achieved.

2.4.1.1 Planning and Scoping

Starting with the questions listed in Table 2.1 above and information on the case examined, the planning and scoping phase would summarise the key product characteristics as follows:

  (a) A connected toy with four main features: (i) programmed with more than 8,000 lines of dialogueFootnote 45 hosted in the cloud, enabling the doll to talk with the user about “friends, school, dreams and fashion”;Footnote 46 (ii) speech recognition technologyFootnote 47 activated by a push-and-hold button on the doll's belt buckle; (iii) equipped with a microphone, speaker and two tri-colour LEDs embedded in the doll’s necklace, which light up when the device is active; (iv) a Wi-Fi connection to provide for two-way conversation.Footnote 48

  (b) The target user is an English-speaking child (minor). Theoretically the product could be marketed worldwide, but the language barrier represents a limitation.

  (c) The rightsholders can be divided into three categories: direct users (minors), supervisory users (parents, who have partial remote control over the doll and the doll/user interaction) and third parties (e.g. friends of the direct user or re-users of the doll).

  (d) Regarding data processing, the doll collects and stores voice-recording tracks based on dialogues between the doll and the user; this information may include personal dataFootnote 49 and sensitive information.Footnote 50

  (e) The main purpose of the data processing and AI is to create human–robot interaction (HRI) by using machine learning (ML) to build on the dialogue between the doll and its young users. There are also additional purposes: (i) educational; (ii) parental control and surveillanceFootnote 51 (parents can listen, store and re-use recorded conversations);Footnote 52 (iii) direct advertising to parents;Footnote 53 (iv) testing and service improvement.Footnote 54

  (f) The chief duty-bearer is the producer, but in connected toys other partners – such as ToyTalk in the Hello Barbie case – may be involved in the provision of ML, cloud and marketing services.

Another important set of data to be collected at this stage concerns the potential interplay with human rights and the reference framework, including main international/regional legal instruments, relevant courts or other authoritative bodies, and relevant decisions and provisions.

As regards the rights potentially affected, depending on the product’s features and purposes, data protection and the right to privacy are the most relevant due to the possible content of the dialogue between the doll and the user, and the parental monitoring. Here the legal framework is represented by a variety of regulations at different levels. Compliance with the US COPPAFootnote 55 and the EU GDPRFootnote 56 can cover large parts of the potential market of this product and international guiding PrinciplesFootnote 57 can facilitate the adoption of global policies and solutions.

Moreover, in relation to data processing and individual freedom of choice, the potential effects of marketing strategies can also be considered as forms of freedom of expressionFootnote 58 and freedom to conduct a business.

Given the broad interaction between the doll and the user and the behavioural, cultural and educational influence that the doll may have on young users,Footnote 59 further concerns relate to freedom of thought and diversity.Footnote 60

In the event of cyberattack and data theft or transmission of inappropriate content to the user through the doll, safety issues also arise and may impact on the right to psychological and physical safety and health.

With the potentially global distribution of the toy, the possible impacts need to be further contextualised within each relevant legal framework, taking into consideration local case law and that of regional supranational bodies like the European Court of Human Rights. In this regard, it is necessary during the scoping phase to identify the significant provisions and decisions in the countries/regions where the product is distributed.

The last aspect to be considered in planning and scoping HRIA concerns the identification and engagement of potential stakeholders. In the case of connected toys, the most important stakeholders are likely to be parents’ associations, educational bodies, professional associations (e.g. psychologists and educators), child, consumer and data protection supervisory bodies, as well as trade associations. Stakeholders may also include the suppliers involved in product/service development. In the latter case, the HRIA must also assess the activities by these suppliers and may benefit from an auditing procedureFootnote 61 or the adoption of standards.

The following sections describe an iterative assessment process, starting from the basic idea of the connected AI-equipped toy with its pre-set functionality and moving on to a further assessment considering additional measures to mitigate unaddressed, or only partially addressed, concerns.

2.4.1.2 Initial Risk Analysis and Assessment

The basic idea of the toy is an interactive doll, equipped with speech recognition and learning features, operating as an IoT device. The main component is a human-robot voice interaction feature based on AI and enabled by Internet connection and cloud services.

The rights potentially impacted are data protection and privacy, freedom of thought and diversity, and psychological and physical safety and health.Footnote 62

Data Protection and the Right to Privacy

While these are two distinct rights, for the purpose of this case study we considered them together.Footnote 63 Given the main product features, the impact analysis is based on the following questions:Footnote 64

  • Does the device collect personal information? If yes, what kind of data is collected, and what are the main features of data processing? Can the data be shared with other entities/persons?

  • Can the connected toy intrude into the users’ private sphere?

  • Can the connected toy be used for monitoring and surveillance purposes? If yes, is this monitoring continuous or can the user stop it?

  • Do users belong to vulnerable categories (e.g. minors, elderly people, parents, etc.)?

  • Are third parties involved in the data processing?

  • Are transborder data flows part of the processing operations?

Taking into account the product’s nature, features and settings (i.e. companion toy, dialogue recording, personal information collection, potential data sharing by parents) the likelihood of prejudice can be considered very high (Table 2.4). The extent and largely unsupervised nature of the dialogue between the doll and the user, as well as the extent of data collection and retention make the probability high (Table 2.2). In addition, given its default features and settings, the exposure is very high (Table 2.3) since all the doll’s users are potentially exposed to this risk.

Regarding risk severity, the gravity of the prejudice (Table 2.5) is high, given the subjects involved (young children and minors), the processing of personal data in several main areas, including sensitive information,Footnote 65 and the extent of data collection. In addition, unexpected findings may emerge in the dialogue between the user and the doll, as the harmless topics prevalent in the AI-processed sentences can lead young users to provide personal and sensitive information. Furthermore, the data processing also involves third parties and transborder data flows, which add other potential risks.

The effort to overcome potential prejudice or to reverse adverse effects (Table 2.6) can be considered medium, due to the potential parental supervision and remote control, the nature of the doll’s pre-selected answers and the adoption of standard data security measures, which help to overcome the prejudice suffered with little difficulty (e.g. data erasure, dialogue with the minor in case of unexpected findings). Combining high gravity and medium effort, the resulting severity (Table 2.7) is medium.

If the likelihood of prejudice can be considered very high and the severity medium, the overall impact according to Table 2.9 is high.
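Traced with the placeholder functions sketched in Sect. 2.3.2 (whose combination rules are illustrative assumptions, not the published tables), this reasoning can be summarised as follows:

```python
# Data protection and privacy risk for the AI-equipped doll (levels from the text),
# traced with the placeholder functions of Sect. 2.3.2 (illustrative rules only).
L = likelihood(probability="high", exposure="very high")  # -> "very high" (cf. Table 2.4)
S = severity(gravity="high", effort="medium")             # -> "medium"    (cf. Table 2.7)
impact = overall_impact(L, S)                             # -> "high"      (cf. Table 2.9)
```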

Freedom of Thought, Parental Guidance and the Best Interest of the Child

Based on the main features of the product, the following questions can be used for this analysis:

  • Is the device able to transmit content to the user?

  • Which kind of relationships is the device able to create with the user?

  • Does the device share any value-oriented messages with the user?

    • If yes, what kind of values are communicated?

    • Are these values customisable by users (including parents) or on the basis of user interaction? If so, what range of alternative value sets is provided?

    • Are these values the result of work by a design team characterised by diversity?

Here the case study reveals the critical impact of AI on HRI owing to the potential content imparted through the device. This is even more critical in the context of toys where the interactive nature of AI-powered dolls changes the traditional interaction into a relational experience.Footnote 66

In the model considered (Hello Barbie), AI creates a dialogue with the young user by selecting the most appropriate sentence from the more than 8,000 lines of dialogue available in its database. On the one hand, this enables the AI to express opinions which may also include value-laden messages, as in this sentence: “It’s so cool that you want to be a mom someday”.Footnote 67 On the other, some value-based considerations are needed to address educational issues concerning “inappropriate questions”,Footnote 68 where the problem is not the AI reaction as such (Hello Barbie responds “by asking a new question”Footnote 69), but rather the notion of appropriateness, which necessarily involves a value-oriented content classification by the AI system.

As these value-laden features of AI are inevitably defined during the design process, the composition of the design team, its awareness of cultural diversity and pluralism are key elements that impact on freedom of thought, in terms of default values proposed and the availability of alternative settings. In addition, the decision to provide only one option or several user-customisable options in the case of value-oriented content is another aspect of the design phase that can limit parents’ freedom to ensure the moral and religious education of their children in accordance with their own beliefs.

This aspect highlights the paradigm shift brought by AI to freedom of thought and the related parental guidance in supporting the exercise by children of their rights.Footnote 70 This is even more evident when comparing AI-equipped toys with traditional educational products, such as books, serious games etc., whose contents can be examined in advance by parents.Footnote 71

The AI-equipped doll is different. It delivers messages to young users, which may include educational content and information, but no parent will read all the 8,000 lines the doll can use or ask to have access to the logic used to match them with children’s statements.

As AI-based devices interact autonomously with children and convey their own cultural values,Footnote 72 this impacts on the rights and duties of parents to provide, in a manner consistent with the evolving capacities of the child, appropriate direction and guidance in the child’s freedom of thought, including aspects concerning cultural diversity.

In terms of risk assessment, the probability (Table 2.2) is medium, considering the limited number of sentences involving a value-oriented statement, and the exposure (Table 2.3) is medium, due to their alignment with values commonly accepted in many cultural contexts. The likelihood is therefore medium (Table 2.4).

Taking into account the nature of the product and its main features (i.e. some value-laden sentences used in dialogue with the young user),Footnote 73 the gravity of prejudice (Table 2.5) can be considered low in the case in question, as the value-laden sentences concern cultural questions that are not particularly controversial. The effort (Table 2.6) can also be considered low, as talking with children can mitigate potential harm. Combining these two values, the severity is therefore low (Table 2.7).

Note that this assessment would be completely altered if the dialogue content were not pre-selected but generated by AI on the basis of information resulting from web searches,Footnote 74 where the potential risk would be much higher.Footnote 75 Similarly, the inclusion in the pre-recorded database of a greater number of value-laden sentences would directly increase the risk.

Considering the likelihood as medium and the severity of the prejudice as low, the overall impact (Table 2.9) is medium.

Right to Psychological and Physical Safety

Connected toys may raise concerns about a range of psychological and physical harms deriving from their use, including access to data and remote control of the toy.Footnote 76 Based on the main features of the product examined, the following questions can be used for this analysis:

  • Can the device put psychological or physical safety at risk?

  • Does the device have adequate data security and cybersecurity measures in place?

  • Can third parties perpetrate malicious attacks that pose a risk to the psychological or physical safety of the user?

Considering the third-party origin of the prejudice and the limited interest in malicious attacks (no business interest, a distributed and generic target), but also how easy it is to hack the toy, the probability (Table 2.2) of an adverse impact is medium. Exposure (Table 2.3) is low, given the prevalent use of the device in a supposedly safe environment, such as schools and home, where malicious access and control of the doll is difficult and adult monitoring is more frequent. The likelihood (Table 2.4) is therefore low.

Taking into account the nature of the product examined, the young age of the user, and the potential safety and security risks,Footnote 77 the gravity of prejudice (Table 2.5) can be considered medium. This is because malicious attacks can only be carried out by speech, and no images are collected. Nor can the toy – given its size and characteristics – directly cause physical harm to the user. The effort (Table 2.6) can be considered medium since parent-child dialogue and technical solutions can combat the potential prejudice. The severity (Table 2.7) is therefore medium.

Considering the likelihood as low and the severity of the prejudice as medium, the overall impact is medium (Table 2.9).

2.4.1.3 Results of the Initial Assessment

The following table (Table 2.11) shows the results of the assessment carried out on the initial idea of the connected AI-equipped doll described above:

Table 2.11 Table of envisaged risks for the examined case (L: low; M: medium; H: high; VH: very high)
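For clarity, the same results can be collected in a structure mirroring Table 2.11; the fragment below simply restates the levels reached above, using the table’s abbreviations, and performs no additional computation.

```python
# Levels reached in the assessment above, collected in a structure mirroring
# Table 2.11 (the dictionary restates the text; it performs no new computation).
envisaged_risks = {
    "Data protection and privacy":        {"L": "VH", "S": "M", "impact": "H"},
    "Freedom of thought":                 {"L": "M",  "S": "L", "impact": "M"},
    "Psychological and physical safety":  {"L": "L",  "S": "M", "impact": "M"},
}
```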

Based on this table, we can plot a radial graph representing the overall impact on all the affected rights and freedoms. The graph (Fig. 2.2) shows the priority of mitigating potentially adverse impacts on privacy and data protection, followed by risks related to physical integrity and freedom of thought.

Fig. 2.2 Radial graph (impact) of the examined case (Source: the author)

Fig. 2.3 Final radial graph of the examined case (Source: the author; blue line: original impact, orange line: final impact after adoption of mitigation measures and design solutions)

This outcome is confirmed by the history of the actual product, where the biggest concerns of parents and the main reasons for its withdrawal related to personal data and hacking.Footnote 78

2.4.1.4 Mitigation Measures and Re-assessment

Following the iterative assessment, we can imagine that after this initial evaluation of the general idea, further measures are introduced to mitigate the potential risks identified. At this stage, the potential rightsholders and stakeholders (users, parents’ associations, educational bodies, data protection authorities, etc.) can make a valuable contribution to better defining the risks and how to tackle them.

While the role of the rightsholders and stakeholders cannot be directly assessed in this analysis, we can assume that their participation would have shown great concern for risks relating to communications privacy and security. This conclusion is supported by the available documentation on the reactions of parents and supervisory authorities in the Hello Barbie case.Footnote 79

After the first assessment and given the evidence on the requests of rightsholders and stakeholders, the following mitigation measures and by-design solutions could have been adopted with respect to the initial prototype.

(A) Data protection and the right to privacy

Firstly, the product must comply with the data protection regulation of the countries in which it is distributed.Footnote 80 Given the product’s design, we cannot exclude the processing of personal data. The limited number of sentences provided for use by AI, as in the case of Hello Barbie, does not exclude the provision of unexpected content by the user, including personal information.Footnote 81

Risk mitigation should therefore focus on the topics of conversation between the doll and the young user, and the safeguards in processing information collected from the user.

As regards the first aspect, an effective way to limit the potential risks would be to use a closed set of sentences, excluding phrases and questions that might induce the user to disclose personal information, and allowing the owner of the toy to modify these phrases and questions.Footnote 82

Regarding the processing of personal data, the doll’s AI-based information processing functions should be deactivated by default, giving the parents control over its activation.Footnote 83 In addition, to reduce the risk of constant monitoring, deliberate action by the child should be required to activate the doll’s AI-equipped dialogue functions.Footnote 84 This would also help to make users more aware of their interaction with the system and related privacy issues.Footnote 85

Ex post remedies can also be adopted, such as speech detection to remove personal information in recorded data.Footnote 86

Conversations are not monitored, except to support requests from parents. To reduce the impact on the right to privacy and data protection, human review of conversations – to test, improve, or change the technology used – should be avoided, even if specific policies for unexpected findings have been adopted.Footnote 87 Individual testing phases or experiments can be carried out in a laboratory setting or on the basis of user requests (e.g. unexpected reactions and dialogues). This more restrictive approach helps to reduce the impact with respect to the initial design.

Further issues, regarding the information processing architecture and its compliance with data protection principles, concern data storage. This should be minimised and parents given the opportunity to delete stored information.Footnote 88

With regard to the use of collected data, while access to, and sharing of, this information by parentsFootnote 89 are not per se against the interest of the child, caution should be exercised in using this information for marketing purposes. Given the early age of the users and the potentially large amount of information they may provide in their conversation with the doll, plus the lack of active and continuous parental control, the best solution would be not to use child-doll conversations for marketing.Footnote 90

The complexity of data processing activities in the interaction between a child and an AI-equipped doll inevitably affects the form and content of the privacy policies and the options offered to users, as provided by many existing legislations.

A suitable notice and consent mechanism, clear, accessible and legally compliant, is therefore required,Footnote 91 but meeting this obligation is not so simple in the case in question. The nature of the connected toy and the absence of any interface limit awareness of the policies and distance them from direct interaction with the device. This accentuates the perception of the notice and consent mechanism as a mere formality to be completed to access the product.

The last crucial area concerns data security. This entails a negative impact that goes beyond personal data protection and, as such, is also analysed below under impact on the right to psychological and physical safety.

As the AI-based services are hosted by the service provider, data security issues concern both device-service communications and malicious attacks to the server and the device. Encrypted communications, secure communication solutions, and system security requirements for data hosted and processed on the server can minimise potential risks, as in the case study, which also considered access to data when the doll’s user changes.Footnote 92

None of these measures prevents the risks of hacking of the device or the local Wi-Fi connection, which are higher when the doll is used outdoors.Footnote 93 This was the chief weakness noted in the case in question and in IoT devices more generally, which are often designed with poor inherent data security and cybersecurity features for cost reasons. To reduce this risk, stronger authentication and encryption solutions have been proposed in the literature.Footnote 94

Taking into account the initial impact assessment plus all the measures described above, the exposure is reduced to low, since users are thus exposed to potential prejudices only in special circumstances, primarily malicious attack. Probability also becomes low, as the proposed measures mitigate the risks relating to dialogue between doll and user, data collection and retention. Likelihood (Table 2.4) is therefore reduced to low.

Regarding severity of prejudice, gravity can be lowered to at least medium by effect of the mitigation measures, but effort remains medium, given the potential risk of hacking. Severity is therefore lowered somewhat (from 5 to 3 in Table 2.7), though remaining medium.

With the likelihood reduced to low and the severity remaining medium, the overall impact according to Table 2.9 is lowered from high to medium.

(B) Impact on freedom of thought

As described in Sect. 2.4.1.2, the impact on freedom of thought is related to the values conveyed by the doll in dialogue with the user. Here the main issue concerns the nature of the messages addressed to the user, their sources and their interplay with the rights and duties of parents to provide appropriate direction and guidance in the child’s exercise of freedom of thought, including issues of cultural diversity.

A system based on Natural Language Processing allows AI various degrees of autonomy in identifying the best response or sentence in the human-machine interaction. Given the issues considered here (the nature of the values shared by the doll with its young user) the two main options are to use a closed set of possible sentences or search for potential answers in a large database, such as the Internet. A variety of solutions can also be found between these two extremes.

Since the main problem is content control, the preferable option is the first, and this was indeed the solution adopted in the Hello Barbie case.Footnote 95 Content can thus be fine-tuned to the education level of the user, given the age range of the children.Footnote 96 This reduces the risk of unexpected and inadequate content and, where full lines of dialogue are available (this was the case with Hello Barbie), parents are able to get an idea of the content offered to their children.

Some residual risks remain, however, due to intentional or unintentional cultural models or values, including the distinction between appropriate and inappropriate content.Footnote 97 This is due to the special relationship the toy generatesFootnote 98 and the limited mitigation provided by transparency about the pre-recorded lines of dialogue.

To address these issues, concerning both freedom of thought and diversity, the AI system should embed a certain degree of flexibility (user-customizable content) and avoid stereotyping by default. To achieve this, the team working on pre-recorded sentences and dialogues should be characterised by diversity, adopting a by-design approach and bearing in mind the target user of the product.Footnote 99

Moreover, taking into account the parents’ point of view, mere transparency, i.e. access to the whole body of sentences used by the doll, is not enough. As is demonstrated extensively in the field of data protection, information on processing is often disregarded by the user and it is hard to imagine parents reading 8,000 lines of dialogue before buying a doll.

To increase transparency and user awareness, therefore, forms of visualisation of these values through logic and content maps could be useful to easily represent the content used. In addition, it would be important to give parents the opportunity to partially shape the AI reactions, customising the values and content, providing other options relating to the most critical areas in terms of education and freedom of thought.

With regard to the effects of these measures, they mitigate both the potentially adverse consequences of initial product design and the lack of parental supervision of content, minimising the probability of an adverse impact on freedom of thought. The probability (Table 2.2) is therefore lowered to low.

Given the wide distribution of the product, the potential variety of cultural contexts and the need for an active role of parents to minimise the risk, the exposure remains medium, although the number of affected individuals is expected to decrease (Table 2.3).

If the probability is low and the exposure is medium, the likelihood (Table 2.4) is lowered to low after the adoption of the suggested mitigation measures and design solutions.

The gravity of prejudice and the effort were originally low and the additional measures described can further reduce gravity through a more responsible management of content which might support potentially conflicting cultural models or values. Severity therefore remains low.

Considering both likelihood and severity as low, the overall impact (Table 2.9) is reduced from medium to low, compared with the original design model.

(C) Impact on the right to psychological and physical safety

The potential impact in this area is mainly related to malicious hacking activitiesFootnote 100 that might allow third parties to take control of the doll and use it to cause psychological and physical harm to the user.Footnote 101 This was one of the most widely debated issues in the Hello Barbie case and one of the main reasons that led Mattel to stop producing this toy.Footnote 102 Possible mitigation measures are the exclusion of interaction with other IoT devices,Footnote 103 strong authentication and data encryption.Footnote 104

As regards likelihood, considering the protection measures adopted and the low interest of third parties in this type of individual and context-specific malicious attack, the probability is low (Table 2.2). Although the suggested measures do not affect the exposure, this remains low due to the limited circumstances in which a malicious attack can be carried out (Table 2.3). The likelihood therefore remains low but is lowered (from 2 to 1 in Table 2.4).

Regarding severity, the proposed measures do not impact on the gravity of the prejudice (Table 2.5), or the effort (Table 2.6) which remain medium. Severity therefore remains medium (Table 2.7).

Since the final values of neither likelihood nor severity change, overall impact remains medium (Table 2.9), with malicious hacking being the most critical aspect of the product in terms of risk mitigation.

Table 2.12 shows the assessment of the different impacts, comparing the results before and after the adoption of mitigation measures.

Table 2.12 Comparative risk impact analysis table (examined case)
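Purely as a restatement of the results above in the structure of Table 2.12, the before/after comparison can be summarised as follows:

```python
# Initial and residual overall impacts for the examined case, mirroring the
# structure of Table 2.12 (values restate the assessment in the text).
comparative_impacts = {
    "Data protection and privacy":        {"initial": "high",   "residual": "medium"},
    "Freedom of thought":                 {"initial": "medium", "residual": "low"},
    "Psychological and physical safety":  {"initial": "medium", "residual": "medium"},
}
```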

In the case in question, the table contains no EF column (see Table 2.10), since there are no factors that could exclude the risk, such as mandatory impacting features or overriding competing interests recognised by law.

The radial graph in Fig. 2.3 shows the concrete effect of the assessment (the blue line represents the initial impacts and the orange line the impacts after adoption of the measures described above). It should be noted that the reduction of potential impact is limited, as the Hello Barbie product already included several options and measures to mitigate adverse effects on rights and freedoms (pre-recorded sentences, no use of open web content, data encryption, parental access to stored data, etc.). The effect would have been greater starting from a general AI-equipped doll using Natural Language Processing to interact with children, without mitigation measures.

In this regard, the proposed HRIA model is in line with a human rights-by-design approach, where the design team is asked to consider human rights impacts from the earliest product design stages, discarding those options that have an obvious negative impact on human rights. With this approach, there is no HRIA 0 in which the proposed product is completely open to the riskiest scenarios (e.g. a connected doll equipped with unsupervised AI that uses all available web sources to dialogue with young users, with unencrypted doll-user communication sent to a central datacentre where information is stored without a time limit and used for further purposes, including marketing communications directed at doll users).

In human rights-oriented design, HRIA thus becomes a tool to test, refine and improve design options that already embody a risk-aware approach.

2.4.2 A Large-Scale Case Study: Smart City Government

Large-scale projects using data-intensive AI applications are characterised by a variety of potentially impacted areas concerning both individuals and groups. This produces a more complex, multi-factor scenario which cannot be fully assessed by merely aggregating the results of HRIAs conducted for each component of these projects.

An example is provided by data-driven smart cities, where the integration of different layers affecting a variety of human activities means that the cumulative impact is greater than the sum of the impacts of each application.

In such cases, a HRIA for AI systems also needs to consider the cumulative effect of data use and the AI strategies adopted, as already happens in HRIA practice with large-scale scenario cases. This is all the more important in the field of AI, where large-scale projects often feature a single or dominant technology partner who benefits from a general overview of all the different processing activities (‘platformisation’Footnote 105).

The Sidewalk project in Toronto is an example of this ‘platformisation’ effect and a case study in the consequent impacts on rights and freedoms. This now-concluded smart city project was widely debatedFootnote 106 and raised several human rights-related issues common to other data-intensive projects.

The case concerned a requalification project for the Quayside, a large urban area on Toronto’s waterfront largely owned by the Toronto Waterfront Revitalization Corporation (Waterfront Toronto). Based on an agreement between the City of Toronto and Waterfront Toronto,Footnote 107 in 2017, through a competitive Request for Proposals, Waterfront Toronto hired Sidewalk Labs (a subsidiary of Alphabet Inc.) to develop a proposal for this area.Footnote 108

This proposal – the Master Innovation and Development Plan or MIDPFootnote 109 – outlined a vision for the Quayside site and suggested data-driven innovative solutions across the following areas: mobility and transportation; building forms and construction techniques; core infrastructure development and operations; social service delivery; environmental efficiency and carbon neutrality; climate mitigation strategies; optimisation of open space; data-driven decision making; governance and citizen participation; and regulatory and policy innovation.Footnote 110

This long list of topics shows how this data-intensive project went beyond mere urban requalification to embrace goals that are part of the traditional duties of a local administration, pursuing public interest purposesFootnote 111 with potential impacts on a variety of rights and freedoms.

The Sidewalk caseFootnote 112 suggests several takeaways for the HRIA model. First, an integrated model, which combines the HRIAs of the different technologies and processes adopted within a multi-factor scenario, is essential to properly address the overall impact, including a variety of socio-technical solutions and impacted areas.

Second, the criticism surrounding civic participation in the Sidewalk project reveals how the effective engagement of relevant rightsholders and stakeholders is central from the earliest stages of proposal design. Giving voice to potentially affected groups mitigates the risk of developing top-down, merely technology-driven solutions, which carry a higher risk of rejection and negative impact.

Third, the complexity and extent of large-scale integrated HRIA for multi-factor scenarios require a methodological approach that cannot be limited to an internal self-assessment but demand an independent third-party assessment by a multidisciplinary team of experts, as in traditional HRIA practice.

These elements suggest three key principles for large-scale HRIA: independence, transparency, and inclusivity. Independence requires third-party assessors with no legal or material relationship with the entities involved in the projects, including any potential stakeholders.

Transparency concerns both the assessment procedure, facilitating rightsholder and stakeholder participation, and the public availability of the assessment outcome,Footnote 113 using easily understandable language. In this sense, transparency is linked to inclusivity, which concerns the engagement of all the different rightsholders and stakeholders impacted by the activities examined (Table 2.13).

Table 2.13 Multi-factor scenario HRIA: main stages and tasks

An additional important contribution of the integrated HRIA is its ability to shed light on issues that do not emerge in assessing single components of large-scale AI systems, as the cumulative effect of such projects is key. Here, the human rights layer opens up to a broader perspective which includes the impact of socio-technical solutions on democratic participation and decisions.

The Urban Data Trust proposed by Sidewalk Labs and its role in the Toronto project is an example in this sense. The Urban Data Trust was tasked with establishing “a set of RDU [Responsible Data Use] Guidelines that would apply to all entities seeking to collect or use urban data” and with implementing and managing “a four-step process for approving the responsible collection and use of urban data”; any entity wishing to collect or use urban data in the district “would have to comply with UDT [Urban Data Trust] requirements, in addition to applicable Canadian privacy laws”.Footnote 118

This important oversight body was to be created by an agreement between Waterfront Toronto and Sidewalk LabsFootnote 119 and composed of a board of five members (a data governance, privacy, or intellectual property expert; a community representative; a public-sector representative; an academic representative; and a Canadian business industry representative), acting as a sort of internal review board and supported by a Chief Data Officer who, under the direction of the board, was to carry out crucial activities concerning data use.Footnote 120 In addition, the Urban Data Trust would have to enter into contracts with all entities authorised to collect or use urban dataFootnote 121 in the district, and these data sharing agreements could also “potentially provide the entity with the right to enter onto property and remove sensors and other recording devices if breaches are identified”.Footnote 122

Although this model was later abandoned due to the concerns it raised,Footnote 123 it shows the intention to create an additional layer of data governance, different from both the individual dimension of informational self-determination and the collective dimension of public interest managed by public bodies, within a process of centralisation and privatisation of the governance of data generated within a community.Footnote 124

In this sense, the overall impact of AI applications in urban spaces and their coordination by a dominant player providing the technological infrastructure raise important questions about the cumulative effect on potentially impacted rights and, even more, about democracy and the socio-political dimension of the urban landscape,Footnote 125 particularly in terms of the division of public and private responsibilities on matters of collective interest.

This privatisation of the democratic decision process, based on the ‘platformisation’ of the city, directly concerns the use of data, but is no longer just about data protection. In socio-technical contexts, data governance is about human rights in general, insofar as the use of data by different AI applications raises issues about a variety of potentially adverse effects on different rights and freedoms.Footnote 126 If data becomes a means of managing and governing society, its use necessarily has an impact on all the rights and freedoms of individuals and society. This impact is further exacerbated by the empowerment enabled by AI technologies (e.g. the use of facial recognition to replace traditional video-surveillance tools).

For these reasons, cumulative management of different data-intensive systems impacting on the social environment cannot be left to private service providers or an ad hoc associative structure, but should remain within the context of public law, centred on democratic participation in decision-making processes affecting general and public interest.Footnote 127

Large-scale data-intensive AI projects therefore suggest using the HRIA not only to assess the overall impact of all the various AI applications used, but also to go beyond the safeguarding of human rights and freedoms. The results of this assessment therefore become a starting point for a broader analysis and planning of democratic participation in the decision-making process on the use of AI, including democratic oversight on its application.Footnote 128

In line with the approach adopted by international human rights organisations, the human rights dimension should combine with the democratic dimension and the rule of law in guiding the development and deployment of AI projects from their earliest stages.Footnote 129

The findings of the HRIA will therefore also contribute to addressing the so-called ‘Question Zero’ about the desirability of using AI solutions in socio-technical systems. This concerns democratic participation and the freedom of individuals, which are even more important in the case of technological solutions in an urban context, where people often have no real opportunity to opt out due to the solutions being deeply embedded in the structure of the city and its essential services.

A key issue then for the democratic use of AI concerns architecture design and its impact on rights and freedoms. The active role of technology in co-shaping human experiencesFootnote 130 necessarily leads us to focus on the values underlying the technological infrastructure and how these values are transposed into society through technology.Footnote 131 The technology infrastructure cannot be viewed as neutral, but as the result of both the values, intentionally or unintentionally, embedded in the devices/services and the role of mediation played by the different technologies and their applications.Footnote 132

These considerations on the power of designers – which are widely discussed in the debate on technology designFootnote 133 – are accentuated in the context of smart cities and in many large-scale AI systems. Here, the key role of service providers and the ‘platformisation’ of these environmentsFootnote 134 shed light on the part these providers play with respect to the overall impact of the AI systems they manage.

In this scenario, the HRIA can play an important role in assessing values and supporting a human rights-oriented design that also pays attention to participatory processes and democratic deliberation governing large-scale AI systems. This can facilitate the concrete development of a truly trustworthy AI, in which trust is based on respect for human rights, democracy and the rule of law.

2.5 Summary

The recent turn in the debate on AI regulation from ethics to law, the wide application of AI and the new challenges it poses in a variety of fields of human activity are urging legislators to find a paradigm of reference to assess the impacts of AI and to guide its development. This cannot be done only at a general level, on the basis of guiding principles and provisions: the paradigm must be embedded in the development and deployment of each application.

With a view to providing a global approach in this field, human rights and fundamental freedoms can offer this reference paradigm for a truly human-centred AI. However, this growing interest in a human rights-focused approach needs to be turned into effective tools that can guide AI developers and key AI users, such as municipalities, governments, and private companies.

To bridge this gap with regard to the potential role of human rights in addressing and mitigating AI-related risks, this chapter has suggested a model for human rights impact assessment (HRIA) as part of the broader HRESIA model. This is a response to the lack of a formal methodology to facilitate an ex-ante approach based on a human-oriented design of product/service development.

The proposed HRIA model for AI has been developed in line with existing practices in human rights impact assessment, but in a way that better responds to the specific nature of AI applications, in terms of scale, impacted rights and freedoms, prior assessment of product design, and assessment of risk levels, as required by several proposals on AI regulation.Footnote 135

The result is a tool that can be easily used by entities involved in AI development from the outset, in the design of new AI solutions, and can follow the product/service throughout its lifecycle. This assessment model provides specific, measurable and comparable evidence on potential impacts, their probability, extension, and severity, facilitating comparison between alternative design options and an iterative approach to AI design based on risk assessment and mitigation.

In this sense, the proposed human rights module of the HRESIA is no longer just an assessment tool but a human rights management tool, providing clear evidence for a human rights-oriented development of AI products and services and their risk management.

In addition, a more transparent and easy-to-understand impact assessment model facilitates a participatory approach to AI development by rightsholders and potential stakeholders, giving them clear and structured information about possible options and the effects of changes in AI design, and contributing to the development of the ethical and social components of the HRESIA.Footnote 136

Finally, the proposed model can also be used by supervisory authorities and auditing bodies to monitor risk management in relation to the impact of data use on individual rights and freedoms.

Based on these results, several conclusions can be drawn. The first general one is that conducting a HRIA should be seen not as a burden or a mere obligation, but as an opportunity. Given the nature of AI products/services and their features and scale, the proposed assessment model can significantly help companies and other entities to develop effective human-centric AI in challenging contexts.

The model can also contribute to a more formal and standardised assessment of AI solutions, facilitating the choice between different possible approaches. Although HRIA has already been adopted in several contexts, large-scale projects are often assessed without a formal evaluation of risk likelihood and severity.Footnote 137 Traditional HRIA reports often describe the risks found and their potential impact, but with no quantitative assessment, providing recommendations without grading the level of impact and leaving duty bearers to define a proper action plan.

This approach to HRIA is in line with voluntary and policy-based HRIA practice in the business sector. However, once HRIA becomes a legal tool – as suggested by the European Commission and the Council of EuropeFootnote 138 – it is no longer merely a source of recommendations for better business policy. Future AI regulation will most likely bring specific legal obligations and sanctions for non-compliance in relation to risk assessment and management, as well as specific risk thresholds (e.g. high risk).

Analysis of potential impact will therefore become an element of regulatory compliance, with mandatory adoption of appropriate mitigation measures, and barriers in the event of high risk. A model that enables a graduation of risk can therefore facilitate compliance and reduce risks by preventing high-risk AI applications from being placed on the market.
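To make the compliance implication of a graduated risk level more concrete, the sketch below shows how a graded impact level could feed a compliance gate. The thresholds and outcomes are illustrative assumptions; the actual categories and consequences would be those set by the applicable regulation.

```python
# Sketch of a compliance gate driven by a graded impact level. The mapping
# from level to outcome is an illustrative assumption, not a legal rule.
from enum import Enum

class Impact(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

def compliance_decision(impact: Impact) -> str:
    """Map a graded impact level to an illustrative compliance outcome."""
    if impact is Impact.HIGH:
        return "do not deploy: adopt further mitigation measures and reassess"
    if impact is Impact.MEDIUM:
        return "deploy only with documented mitigation and ongoing monitoring"
    return "deploy with periodic review"

print(compliance_decision(Impact.MEDIUM))
```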

With large-scale projects, such as smart cities, assessing each technological component using the proposed model and mitigating adverse effects is not sufficient. A more general overall analysis must be conducted in addition. Only an integrated assessment can consider the cumulative effect of a socio-technical systemFootnote 139 by measuring its broader impacts, including the consequences in terms of democratic participation and decision-making processes.

This integrated assessment, based on broader fieldwork, citizen engagement, and a co-design process, can evaluate the overall impact of an entire AI-based environment, in a way that is closer to traditional HRIA models.
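A minimal sketch can also illustrate the point that an integrated assessment is more than the aggregation of component HRIAs: system-wide, cumulative factors (for instance, a single dominant platform provider or pervasive data collection) can raise the overall level beyond that of any single component. The scoring and the cumulative rule below are assumptions introduced for the example only.

```python
# Illustrative sketch: the integrated impact starts from the highest component
# impact and is raised by one level when cumulative, system-wide factors are
# present. Both the scale and the rule are assumptions for this example.
LEVELS = ["low", "medium", "high"]

def integrated_impact(component_impacts: list[str],
                      cumulative_factors: list[str]) -> str:
    """Combine component impacts, accounting for cumulative factors."""
    base = max(component_impacts, key=LEVELS.index)
    if cumulative_factors and base != "high":
        return LEVELS[LEVELS.index(base) + 1]
    return base

print(integrated_impact(["low", "medium", "low"],
                        ["platformisation", "pervasive data collection"]))
# -> "high", even though no single component exceeds "medium"
```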

In both cases, figures such as the human rights officer and tools like a HRIA management plan, containing action plans with timelines, responsibilities and indicators, can facilitate these processes,Footnote 140 including the possibility of extending them to the supply chain and all potentially affected groups of people.

Finally, the proposed model for the human rights component of the HRESIA model, with its more formalised assessment, can facilitate the accountability and monitoring of AI products and services during their lifecycle,Footnote 141 enabling changes in their impacts to be monitored through periodic reviews, audits, and progress reports on the implementation of the measures taken. It also makes it possible to incorporate more precise human rights indicators in internal reports and plans and make assessment results available to rightsholders and stakeholders clearly and understandably, facilitating their cooperation in a human rights-oriented approach to AI.