4.1 Regulating AI: Three Different Approaches to Regulation

In its early stages, the regulatory debate on AI focused mainly on the ethical dimension of data use and the new challenges posed by data-intensive systems based on Big Data and AI. This approach was supported by several players in the AI industry, probably attracted by the flexibility of self-regulation based on ethical principles, which is less onerous and easier to align with corporate values.Footnote 1

As in the past, uncertainty about the potential impact of the new technology and an existing legal framework not tailored to the new socio-technical scenarios were the main reasons for rule makers to turn their gaze towards general principles and common ethical values.

The European Data Protection Supervisor (EDPS) was the first body to emphasise the ethical dimension of data use, pointing out how, in light of recent technological developments, data protection appeared insufficient to address all the challenges, while ethics “allows this return to the spirit of the [data protection] law and offers other insights for conducting an analysis of digital society, such as its collective ethos, its claims to social justice, democracy and personal freedom”.Footnote 2

This ethical turn was justified by the broader effects of data-intensive technologies in terms of social and ethical impacts, including the collective dimension of data use.Footnote 3 In the same vein, the European Commission set up a high-level group focusing on ethical issues.Footnote 4 This ethical wave later resulted in a flourishing of ethical principles, codes and ethical boards in private companies.Footnote 5

This new focus, which also presented the danger of ‘ethics-washing’,Footnote 6 had the merit of shedding light on basic questions of the social acceptability of highly invasive predictive AI. Such systems may be legally compliant, while at the same time raising crucial questions about the society we want to create, in terms of technological determinism, distribution of power, inclusiveness and equality.

But the ethical debate frequently addressed challenging questions within a rather blurred theoretical framework, with the result that ethical principles were sometimes confused with fundamental rights and freedoms, or principles that were already part of the human rights framework were simply renamed.

A rebalancing of the debate has come from the different approach of the Council of Europe, which has remained focused on its traditional human rights-centred mission,Footnote 7 and the change of direction of the European Commission with a new bundle of proposals for AI regulation.Footnote 8 These bodies do not marginalise the role of ethics, but see moral and social values as complementary to a strategy based on legal provisions and centred on risk management and human rights.Footnote 9

There are three possible approaches to grounding future AI regulation on human rights, which differ depending on the context in which they are placed – international or EU – and their focus.

The first is the principles-based approach, designed mainly for an international context characterised by a variety of national regulations. Here a set of key principles is clearly needed to provide a common framework for AI regulation at the regional or global level.

The second approach, also designed for the international context, is more focused on risk management and safeguarding individual rights. This approach, taken by the Council of Europe, can be complementary to the first one, where the former sets out the key principles and the latter contextualises human rights and freedoms in relation to AI by adding rights-based risk management.

The third approach, embodied by the EU proposal on AI regulation, puts a greater emphasis on (high) risk management in terms of product safety and a conformity assessment. Here the regulatory strategy on AI is centred on a predefined risk classification, a combination of safety and rights protections and standardised processes.

These three models therefore offer a range of options, from a general principles-based approach to a more industry-friendly regulation centred on a conformity assessment of high-risk AI systems. Despite these differences, human rights remain a key element of all of them, though with significant distinctions in emphasis.

All these models also adopt the same co-regulation schema combining hard law provisions with soft-law instruments. This gives the framework flexibility in a field characterised by the rapid evolution of technology and emergence of new issues, while also giving space to sector-specific challenges and bottom-up initiatives.

The HRESIA framework can contribute to all three models by providing a human rights-centred perspective and bridging the two phases of the AI debate by combining a legal framework that takes into account ethical and societal issues with an operational focus that is often absent in the current proposals.

4.2 The Principles-Based Approach

The starting point in identifying the guiding principles that, from a human rights perspective, should underpin future AI regulation is to analyse the existing international legally binding instruments that necessarily represent the general framework in this field. This includes a gap analysis to ascertain the extent to which the current regulatory framework and its values properly address the new issues raised by AI.

Moreover, a principles-based approach focusing on human rights has to consider the state of the art with a view to preserving the harmonisation of the human rights framework, while introducing coherent new AI-specific provisions.

This principles-based approach consists in a targeted intervention, as it focuses on the changes AI will bring to society and not on reshaping every area where AI can be applied. The identification of key principles for AI builds on existing binding instruments and the contextualisation of their guiding principles.

Both the existing binding instruments and the related non-binding implementations – which in some cases already contemplate the new AI scenario – must be considered. This is based on the assumption that the general principles provided by international human rights instruments should underpin all human activities, including AI-based innovation.Footnote 10

Defining key principles for the future regulation of AI through analysis of the existing legal framework requires a deductive methodology, extracting these principles from the range of regulations governing the fields in which AI solutions may be adopted. Two different approaches are possible to achieve this goal: a theoretical rights-focused approach and a field-focused approach based on the provisions set out in existing legal instruments.

In the first case, the various rights enshrined in human rights legal instruments are considered independently and in their abstract notion,Footnote 11 looking at how AI might affect their exercise. In the second, the focus shifts to the legal instruments themselves and the areas they cover, to assess their adequacy in responding to the challenges that AI poses in each sector, from health to justice.

From a regulatory perspective, and with a view to a future AI regulation, building on a theoretical elaboration of individual rights may be more difficult as it entails a potential overlap with the existing legal instruments and may not properly deal with the sectoral elaboration of such rights. On the other hand, a focus on legal instruments and their implementation can facilitate better harmonisation of new provisions on AI within the context of existing rules and binding instruments.

Once the guiding principles have been identified, they should be contextualised within the scenario transformed by AI, which in many cases requires their adaptation. The principles remain valid, but their implementation must be reconsidered in light of the social and technical changes due to AI.Footnote 12 This delivers a more precise and granular application of these principles so that they can provide a concrete contribution to the shape of future AI regulation.

This principles-based approach requires a vertical analysis of the key principles in each of the fields regulated by international instruments, followed by a second phase considering the similarities and common elements across all fields. Ultimately, such an approach should valorise individual human rights, starting from the existing legal framework rather than from an abstract theoretical notion of each right and freedom.

As the existing international instruments are sector-specific and not rights-based, the focus of the initial analysis is on thematic areas and then a set of guiding principles common to all areas is developed. These shared principles can serve as the cornerstone for a common core of future AI provisions.

A key element in this process is the contextualisation of the guiding principles and legal values, taking advantage of the non-binding instruments which provide granular applications of the principles enshrined in the binding instruments.

AI technologies have an impact on a variety of sectorsFootnote 13 and raise issues relating to a large body of regulatory instruments. However, from a methodological point of view, a possible principles-based approach to AI regulation can be validated by selecting a few key areas where the impact of AI on individuals and society is particularly marked and the challenges are significant. This is the case for data protection and healthcare.

The intersection between these two realms is interesting in view of future AI regulation, given the large number of AI applications concerning healthcare data and the common ground between the two fields. This is reflected in several provisions of international binding instruments,Footnote 14 as well as non-binding instruments.Footnote 15 Individual self-determination also plays a central role in both these fields, and the challenges of AI – in terms of the complexity and opacity of medical treatments and data processing operations – are therefore particularly relevant and share common concerns.

4.2.1 Key Principles from Personal Data Regulation

Over the past decade, the international regulatory framework in the field of data protection has seen significant renewal. Legal instruments shaped by principles defined in the 1970s and 1980s no longer responded to the changed socio-technical landscape created by the increasing availability of bandwidth for data transfer, data storage and computational resources (cloud computing), the progressive datafication of large parts of our life and environment (the Internet of Things, IoT), and large-scale and predictive data analysis based on Big Data and Machine Learning.

In Europe the main responses to this change have been the modernised version of Convention 108 (Convention 108+) and the GDPR. A similar redefinition of the regulatory framework has occurred, or is ongoing, in other international contexts – such as the OECDFootnote 16 – or in individual countries.

However, given the rapid development of the latest wave of AI, these new measures fail to directly address some AI-specific challenges; several non-binding instruments have been adopted to bridge this gap, and further regulatory strategies are under discussion.Footnote 17 This section examines the following data-related international non-binding legal instruments: Council of Europe, Guidelines on Artificial Intelligence and Data Protection [GAI];Footnote 18 Council of Europe, Guidelines on the protection of individuals with regard to the processing of personal data in a world of Big Data [GBD];Footnote 19 Recommendation CM/Rec(2019)2 of the Committee of Ministers of the Council of Europe to member States on the protection of health-related data [CM/Rec(2019)2];Footnote 20 Recommendation CM/Rec(2010)13 of the Committee of Ministers of the Council of Europe to member States on the protection of individuals with regard to automatic processing of personal data in the context of profiling [CM/Rec(2010)13]; UNESCO, Preliminary Study on a Possible Standard-Setting Instrument on the Ethics of Artificial Intelligence, 2019 [UNESCO 2019];Footnote 21 OECD, Recommendation of the Council on Artificial Intelligence, 2019 [OECD];Footnote 22 40th International Conference of Data Protection and Privacy Commissioners, Declaration on Ethics and Data Protection in Artificial Intelligence, 2018 [ICDPPC].Footnote 23,Footnote 24

These instruments differ in nature: while some define specific requirements and provisions, others are mainly principles-based instruments that set out guidelines without providing more detailed rules, or do so only partially.

Based on these instruments and focusing on those provisions most pertinent to AI issues,Footnote 25 it is possible to identify several general guiding principles which can then be contextualised with respect to AI. Several of these principles can be extended to non-personal data, mainly as regards the impact of its use (e.g. aggregated data) on individuals and groups in decision-making processes.

A first group of principles (the primacy of the human being, human control and oversight, participation and democratic oversight) concerns the relationship between humans and technology, granting the former – either as individuals or social groups – control over technological development, in particular regarding AI.

To refine the key requirements enabling human control over AI and support human rights-oriented development, we can identify a second set of principles focused on the following areas: transparency, risk management, accountability, data quality, the role of experts and algorithm vigilance.

Finally, the binding and non-binding international instruments reveal a further group of more general principles concerning AI development that go beyond data protection. These include rules on interoperability between AI systems,Footnote 26 as well as digital literacy, education and professional training.Footnote 27

Primacy of the Human Being

Although this principle is only explicitly enshrined in the Oviedo Convention and not in the binding international instruments on data protection, such as Convention 108 and 108+, the primacy of the human being is an implicit reference when data is used in the context of innovative technologies.Footnote 28 This is reflected in the idea that data processing operations must “serve the data subject”.Footnote 29 More generally, the primacy of the human being over science is a direct corollary of the principle of respect for human dignity.Footnote 30 Dignity is a constitutive element of the European approach to data processing,Footnote 31 and of the international approach to civil and political rights in general.Footnote 32 Wider reference to human dignity can also be found in the non-binding instruments focused on AI.Footnote 33

Affirming the primacy of the human being in the context of artificial intelligence means that AI systems must be designed to serve mankind and that the creation, development and use of these systems must fully respect human rights, democracy and the rule of law.

Human Control and Oversight

Since the notion of data protection originally rested on the idea of control over the use of information in information and communication technologies, and the first data protection regulations were designed to give individuals some counter-control over the data that was collected,Footnote 34 human control plays a central role in this area. It is also related to the importance of self-determinationFootnote 35 in the general theory of personality rights and the importance of human oversight in automated data processing.

Moreover, in the field of law and technology, human control plays an important role in terms of risk management and liability. Human control over potentially harmful technology applications ensures a degree of safeguard against the possible adverse consequences for human rights and freedoms.

Human control is thus seen as critical from a variety of perspectives – as borne out by both Convention 108+Footnote 36 and the non-binding instruments on AIFootnote 37 – and it also encompasses human oversight on decision-making processes delegated to AI systems. Several guiding principles for future AI regulation can therefore be discerned in the instruments examined.

When human control and oversight are contextualised with regard to AI, applications should allow meaningfulFootnote 38 control by human beings over their effects on individuals and society. Moreover, AI products and services must be designed in such a way as to grant individuals the right not to be subject to a decision which significantly affects them taken solely on the basis of automated data processing, without having their views taken into consideration. In short, AI products and services must allow general human control over them.Footnote 39

Finally, the role of human intervention in AI-based decision-making processes and the freedom of human decision makers not to rely on the results of AI-generated recommendations should be preserved.Footnote 40

Participation and Democratic Oversight on AI Development

Turning to the collective dimension of the use of data in AI,Footnote 41 human control and oversight cannot be limited to supervisory entities, data controllers or data subjects. Participatory and democratic oversight procedures should give voice to society at large, including various categories of people, minorities and underrepresented groups.Footnote 42 This supports the notion that participation in decision-making serves to advance human rights and is crucially important in bringing specific issues to the attention of the public authorities.Footnote 43

Since human control over potentially hazardous technology entails a risk assessment,Footnote 44 this assessment should also adopt a participatory approach. In the context of AI, this means that participatory forms of risk assessment should be developed with the active engagement of the individuals and groups potentially affected. Individuals, groups, and other stakeholders should therefore be informed and actively involved in the debate on what role AI should play in shaping social dynamics, and in the decision-making processes affecting them.Footnote 45

Derogations may be introduced in the public interest, where proportionate in a democratic society and with adequate safeguards. In this regard, in policing, intelligence, and security, where public oversight is limited, governments should report regularly on their use of AI.Footnote 46

Transparency and Intelligibility

Transparency is a challengingFootnote 47 and highly debated topic in the context of AI,Footnote 48 with several different interpretations, including the studies on ‘Explainable AI’. In this sense, it is one of the data protection principles that is stressed most frequently.Footnote 49

But effective transparency is hampered by complex analysis processes, non-deterministic models, and the dynamic nature of many algorithms. Furthermore, solutions such as the right to explanation focus on decisions affecting specific persons, while the problems of the collective use of AI at group levelFootnote 50 remain unaddressed.

In any case, none of these points diminishes the argument for the central role of transparency and AI intelligibility in safeguarding individual and collective self-determination. This is truer still in the public sector, where the limited variability of algorithms (ensuring equality of treatment and uniform public procurement procedures) can afford greater transparency levels.

In the AI context, every individual must therefore have the right to be properly informed when interacting directly with an AI system and to receive adequate and easy-to-understand information on its purpose and effects, including the existence of automated decisions. This information is necessary to enable overall human control over such systems, to verify alignment with individuals’ expectations and to enable those adversely affected by an AI system to challenge its outcome.Footnote 51 Every individual should also have a right to obtain, on request, knowledge of the reasoning underlying any AI-based decision-making process where the results of such process are applied to him or her.Footnote 52

Finally, to foster transparency and intelligibility, governments should promote scientific research on explainable AI and best practices for transparency and auditability of AI systems.Footnote 53

Precautionary Approach and Risk Management

Regarding the potentially adverse consequences of technology in general, it is important to make a distinction between cases in which the outcome is known with a certain probability and those where it is unknown (uncertainty). Since building prediction models for uncertain consequences is difficult, we must assume that “uncertainty and risk are defined as two mutually exclusive concepts”.Footnote 54

Where there is scientific uncertainty about the potential outcome, a precautionary approachFootnote 55 should be taken, rather than conducting a risk analysis.Footnote 56 The same conclusion can be drawn for AI where the potential risks of an AI application are unknown or uncertain.Footnote 57 In all other cases, AI developers, manufacturers and service providers should assess and document the possible adverse consequences of their work for human rights and fundamental freedoms, and adopt appropriate risk prevention and mitigation measures from the design phase (human rights by-design approach) and throughout the lifecycle of AI products and services.Footnote 58

The development of AI raises specific forms of risk in the field of data protection. One widely discussed example is that of re-identification,Footnote 59 while the risk of de-contextualisation is less well known. In the latter case, data-intensive AI applications may ignore contextual information needed to understand and apply the proposed solution. De-contextualisation can also impact the choice of algorithmic models, re-using them without prior assessment in different contexts and for different purposes, or using models trained on historical data of a different population.Footnote 60

The adverse consequences of AI development and deployment should therefore include those that are due to the use of de-contextualised data and de-contextualised algorithmic models.Footnote 61 Suitable measures should also be introduced to guard against the possibility that anonymous and aggregated data may result in the re-identification of the data subjects.Footnote 62

Finally, Convention 108+ (like the GDPR) adopts a two-stage approach to risk: an initial self-assessment is followed by a consultation with the competent supervisory authority if there is residual high risk. A similar model can be extended to AI-related risks.Footnote 63 AI developers, manufacturers, and service providers should consult a competent supervisory authority where AI applications have the potential to significantly impact the human rights and fundamental freedoms of individuals.Footnote 64

Accountability

The principle of accountability is recognised in Convention 108+Footnote 65 and is more generally considered as a key element of risk management policy. In the context of AI,Footnote 66 it is important to stress that human accountability cannot be hidden behind the machine. Although AI generates more complicated scenarios,Footnote 67 this does not exclude accountability and responsibility of the various human actors involved in the design, development, deployment and use of AI.Footnote 68

From this follows the principle that the automated nature of any decision made by an AI system does not exempt its developers, manufacturers, service providers, owners and managers from responsibility and accountability for the effects and consequences of the decision.

Data Minimisation and Data Quality

Data-intensive applications, such as Big Data analytics and AI, require a large amount of data to produce useful results, and this poses significant challenges for the data minimisation principle.Footnote 69 Furthermore, the data must be gathered according to effective data quality criteria to prevent potential bias, since the consequences for rights and freedoms can be critical.Footnote 70

In the context of AI, this means that developers are required to assess the nature and amount of data used (data quality) and minimise the presence of redundant or marginal dataFootnote 71 during the development and training phases, and then monitor the model’s accuracy as it is fed with new data.Footnote 72

AI development and deployment should avoid any potential bias, including unintentional or hidden bias, critically assess the quality, nature, origin and amount of personal data used, limit unnecessary, redundant or marginal data, and monitor the model’s accuracy.Footnote 73

Role of Experts and Participation

The complex potential impacts of AI solutions on individuals and society mean that the AI development process cannot be delegated to technicians alone. The role of experts from various domains was highlighted in the first non-binding document on AI and data protection, which suggested that AI developers, manufacturers and service providers set up and consult independent committees of experts from a range of fields, and engage with independent academic institutions, which can help in the design of human rights-based AI applications.Footnote 74 Participatory forms of AI development, based on the active engagement of the individuals and groups potentially affected by AI applications, should also be encouraged.Footnote 75

Algorithm Vigilance

The existing supervisory authorities (e.g. data protection authorities, communication authorities, antitrust authorities, etc.) and the various stakeholders involved in the development and deployment of AI solutions should both adopt forms of algorithm vigilance to react quickly in the event of unexpected and hazardous outcomes.Footnote 76

AI developers, manufacturers, and service providers should therefore implement algorithm vigilance by promoting the accountability of all relevant stakeholders, assessing and documenting the expected impacts on individuals and society in each phase of the AI system lifecycle on a continuous basis, so as to ensure compliance with human rights.Footnote 77 Cooperation should be encouraged in this regard between different supervisory authorities having competence for AI.Footnote 78

4.2.2 Key Principles from Biomedicine Regulation

Compared with data protection, international legal instruments on health protection provide a more limited and sector-specific contribution to the drafting of future AI regulation. While data is a core component of AI, such that several principles can be derived from international data protection instruments, healthcare is simply one of many sectors in which AI can be applied. This entails a dual process of contextualisation: (i) some principles stated in the field of data protection can be further elaborated upon with regard to biomedicine; (ii) new principles must be introduced to better address the specific challenges of AI in the sector.

Starting with the Universal Declaration of Human Rights, several international binding instruments include provisions concerning health protection.Footnote 79 Among them, the International Covenant on Economic, Social and Cultural Rights, the European Convention on Human Rights, Convention 108+ and the European Social Charter, all lay down several general provisions on health protection and related rights.Footnote 80 Provisions and principles already set out in other general instruments have a more sector-specific contextualisation in the Universal Declaration on Bioethics and Human Rights (UNESCO) and the Oviedo ConventionFootnote 81 (Council of Europe).

The Oviedo Convention – the only multilateral binding instrument entirely focused on biomedicine – and its additional protocols are the main sources for identifying the key principles in this field,Footnote 82 which require further elaboration to be applied to AI regulation. The Convention is complemented by two non-binding instruments: the Recommendation on health dataFootnote 83 and the Recommendation on research on biological materials of human origin.Footnote 84 The former illustrates the close links between biomedicine (and healthcare more generally) and data processing.

Although the Universal Declaration on Bioethics and Human Rights and the Oviedo Convention – including the related non-binding instruments – were adopted in a pre-AI era, they provide specific safeguards regarding self-determination, human genome treatments, and research involving human beings, which are unaffected by AI applications in this field and require no changes.

However, self-determination in the area of biomedicine faces the same challenges as already discussed for data processing. Notwithstanding the different nature of consent to medical treatment and to data processing, the high degree of complexity and, in several cases, obscurity in AI applications can often undermine the effective exercise of individual autonomy in both cases.Footnote 85

Against this background, the main contribution of the binding international instruments in the field of biomedicine does not concern the sector-specific safeguards they provide, but consists in the important set of general principles and values that can be extrapolated from them to form a building block of future AI regulation.

The key principles can be identified in relation to the following nine areas: primacy of the human being, equitable access, acceptability, the principle of beneficence, private life and right to information, professional standards, non-discrimination, the role of experts, and public debate. This contribution goes beyond biomedicine since several provisions, centred on an appropriate balance between technology and human rights, can be extended to AI in general and contextualised in this field, as explained in the following analysis.Footnote 86

Primacy of the Human Being

In a geo-political and economic context characterised by competitive AI development, the primacy of the human being must be affirmed as a key element in the human rights-oriented approach:Footnote 87 the drive for better performance and efficiency in AI-based systems cannot override the interests and welfare of human beings.

This principle must apply to both the development and use of AI systems (e.g. ruling out systems that violate human rights and freedoms or that have been developed in violation of them).

Equitable Access to Health Care

The principle of equitable access to healthcareFootnote 88 should be extended to the benefits of AI,Footnote 89 especially considering the increasing use of AI in the healthcare sector. This means taking appropriate measures to combat the digital divide, discrimination, the marginalisation of vulnerable persons or cultural minorities, and limited access to information.

Acceptability

Based on Article 12 of the International Covenant on Economic, Social and Cultural Rights, the Committee on Economic, Social and Cultural Rights clarified the notion of acceptability, declaring that all health facilities, goods and services must “be respectful of medical ethics and culturally appropriate”.Footnote 90 Given the potentially high impact of AI-based solutions on society and groups,Footnote 91 acceptability is also a key factor in AI development, as demonstrated by the emphasis on the ethical and cultural dimension found in some non-binding instruments.Footnote 92

Principle of Beneficence

Respect for the principle of beneficence in biomedicine and bioethics and human rightsFootnote 93 should be seen as a requirement where, as mentioned above, the complexity or opacity of AI-based treatments places limitations on individual consent which cannot therefore be the exclusive basis for intervention. In such cases, the best interest of the person concerned should be the main criterion in the use of AI applications.Footnote 94

Private Life and Right to Information

In line with the considerations expressed earlier on data protection, the safeguards concerning self-determination with regard to private life and the right to information already recognised in the field of medicineFootnote 95 could be extended to AI regulation.

With specific reference to the bidirectional right to information about health, AI health applications must guarantee the right to information and respect the wishes of individuals not to be informed, unless compliance with an individual’s wish not to be informed entails a serious risk to the health of others.Footnote 96

Professional Standards

Professional standards are a key factor in biomedicine,Footnote 97 given the potential impacts on individual rights and freedoms. Similarly, AI development involves several areas of expertise, each with its own professional obligations and standards, which must be met where the development of AI systems can affect individuals and society.

Professional skills requirements must be based on the current state of the art. Governments should encourage professional training to raise awareness and understanding of AI and its potential effects on individuals and society, as well as supporting research into human rights-oriented AI.

Non-discrimination

The principle of non-discriminationFootnote 98 and non-stigmatisation in the field of biomedicine and bioethicsFootnote 99 should be complemented by ruling out any form of discrimination against a person or group based on predictions of future health conditions.Footnote 100

Role of Experts

The expertise of ethics committees in the field of biomedicineFootnote 101 should be drawn upon by establishing independent, multidisciplinary and pluralist committees of experts to assess AI applications.Footnote 102

Public Debate

As with biomedicine,Footnote 103 the fundamental questions raised by AI development should be exposed to proper public scrutiny of their crucial social, economic, ethical and legal implications, and their application should be subject to consultation.

Examination of the above key areas demonstrates that the current legal framework on biomedicine can provide important principles and elements to be extended to future AI regulation, beyond the biomedicine sector. However, four particular shortcomings created by the impact of AI remain unresolved, or only partially addressed, and should be further discussed:

  (a)

    Decision-making Systems

    In recent years a growing number of AI applications have been developed for medical diagnosis, using data analytics and ML solutions. Large-scale data pools and predictive analytics are used to try to arrive at clinical solutions based on available knowledge and practices. ML applications in image recognition may provide increased cancer detection capability. Likewise, in precision medicine, large-scale collection and analysis of multiple data sources (medical as well as non-medical data, such as air and housing quality) are used to develop personalised responses to health and disease.

    The use of clinical data, medical records and practices, as well as non-medical data, is not in itself new in medicine and public health studies. However, the scale of data collection, the granularity of the information gathered, the complexity (and in some cases opacity) of data processing, and the predictive nature of the results raise concerns about the potential fragility of decision-making systems.

    Most of these issues are not limited to the health sector, as potential biases (including lack of diversity and the exclusion of outliers and smaller populations), data quality, de-contextualisation, context-based data labelling and the re-use of dataFootnote 104 are common to many AI applications and concern data in general. Existing guidance in the field of data protectionFootnote 105 can therefore be applied here too and the data quality aspects extended to non-personal data.

  (b)

    Consent

    The opacity of AI applications and the transformative use of data in large-scale data analysis undermine the traditional notion of consent in both data processingFootnote 106 and medical treatment. New schemes could be adopted, such as broadFootnote 107 or dynamic consent,Footnote 108 which however – at the present state of the art – would only partially address this problem.

  (c)

    The Doctor-Patient Relationship

    There are several factors in AI-based diagnosis – such as the loss of knowledge that cannot be encoded in data,Footnote 109 over-reliance on AI in medical decisions, the effects of local practices on training datasets, and potential deskilling in the healthcare sectorFootnote 110 – that might affect the doctor-patient relationshipFootnote 111 and need to be evaluated carefully before adoption.

  (d)

    Risk Management

    The medical device industry has already developed risk-based regulatory models, such as Regulation (EU) 2017/745 – based on progressive safeguards according to the class of risk of each device –, which could be generalised in future AI regulation focusing on the impact on human rights and fundamental freedoms. However, a risk-based classification of AI by law is complicated, given the variety of AI and its different fields of application.Footnote 112

4.2.3 A Contribution to a Future Principles-Based Regulation of AI

Based on the analysis of two key areas of AI application, the principles-based approach has revealed how it is possible to define future AI regulation by focusing on a set of guiding principles developed in a way consistent with the existing international human rights framework and reaffirming the central role of human dignity and human rights in AI, where machine-driven solutions risk dehumanising individuals.Footnote 113

The principle-based methodological process, consisting of analysis (mapping and identification of key principles) and contextualisation, has proven its merit in the areas examined, with the development of several key principles. Correlations and common ground between these principles have been identified, facilitating their harmonisation, while other principles represent the unique contributions of each sector to future AI regulation.

The table below (Table 4.1) summarises these findings and the level of harmonisation in these two areas and, notwithstanding the limitations of the scope of this analysis, shows how its results validate the principles-based methodology as a possible scenario for future AI regulation.

Table 4.1 Key principles in Data and Health (AI regulation)

4.3 From Design to Law – The European Approaches and the Regulatory Paradox

In previous sections we have seen how the future regulation of AI could be based on existing international principles. We can carry out a similar exercise with respect to EU law, where similar principles are recognised, though in the presence of a wider variety of binding instruments, owing to the EU’s broader field of action.

Neither the EU legislator nor the Council of Europe, however, decided to follow the principles-based methodology described. Both European legislators abandoned the idea of setting out common founding principles for AI development and opted for a different, more minimalist approach with a greater emphasis on risk prevention.

While the focus on risk is crucial and in line with the HRESIA, there is something of a regulatory paradox in Europe’s approach to AI. On the one hand, an attempt to provide guiding principles was made through ethical guidelines – such as those drafted by the HLEGAIFootnote 114 –, vesting legal principles in ethical requirements. On the other hand, recent regulatory proposals based on binding instruments have preferred not to provide a framework of principles but to focus on specific issues such as banning applications, risk management and conformity assessment.

This is a regulatory paradox, where general legal principles are set out in ethical guidelines while the actual legal provisions lack a comprehensive framework. Although this is more pronounced in Brussels than in Strasbourg, concerns at a European level about the impact of AI regulation on competition and the weakness of the AI industry in Europe appear to take precedence over far-reaching regulation.

Such concerns have restricted measures to high-risk applications,Footnote 115 leaving aside a broader discussion of the role of AI in society and citizen participation in AI project development. This bears similarities with the first generation of data protection law in Europe in the 1960s, when the principal concern was risk and the need to provide safeguards against the danger of a database society.Footnote 116 Only in later waves of legislation was a more sophisticated framework established with reference to general principles, fundamental rights, and comprehensive regulation of data processing. A similar path can be foreseen for AI, where the principles-based methodology described above might figure in more extensive regulation, resolving the present paradox.

The two European legislators also display further similarities in their approach to co-regulation – combining hard and soft law –, setting red lines on the most harmful AI applications, and oversight procedures.

Finally, neither of the proposals seems oriented towards the creation of a new set of rights specifically tailored to AI. This decision is important, since the contextualisation of existing rights and freedoms can often provide adequate safeguards, while some proposals for new generic rights – such as the right to digital identity – rest on notions that are still in their infancy and not mature enough to be enshrined in a legal instrument.

Against these similarities between the two European initiatives, differences necessarily remain, given the distinct institutional and political remits of the Council of Europe and the European Union: the Council’s more variable political and regulatory situation, compared with the EU; the different goals of the two entities, one focused on human rights, democracy and the rule of law, and the other on the internal market and more detailed regulation; the different status of the Council of Europe’s international instruments, which are addressed to Member States, and the EU’s regulations which are directly applicable in all Member States; and – not least – the business interests and pressures which are inevitably more acute for the European Union given the immediate impact of EU regulation on business.

Having described the key features of Europe’s approach, we can go on to discuss the main ways in which it deals with AI risk. After looking at the framing of the relationship between the perceived risks of AI and the safeguarding of human rights in Strasbourg and Brussels, we will examine the possible contribution of the HRESIA model to future regulation.

4.3.1 The Council of Europe’s Risk-Based Approach Centred on Human Rights, Democracy and Rule of Law

On 11 September 2019, during its 1353rd meeting, the Committee of Ministers of the Council of Europe set up the Ad hoc Committee on Artificial Intelligence (CAHAI), mandated to examine the feasibility and potential elements of a legal framework for the development, design and application of AI based on the Council of Europe’s standards on human rights, democracy and the rule of law.Footnote 117 This was the fruit of several ongoing AI initiatives in different branches of the Council of Europe, which had already led to the adoption of important documents in specific sectors.Footnote 118

The CAHAI mandate also confirmed the Council of Europe’s focus on legal instruments and its disinclination to regulate AI on the basis of ethical principles.Footnote 119 In this sense, the Council of Europe anticipated the EU’s turn towards legislation.

After a preliminary study of the most important international and national legal frameworks and ethical guidelines, and an analysis of the risks and opportunities of AI for human rights, democracy and the rule of law,Footnote 120 the CAHAI conducted a Feasibility Study on the development of a horizontal cross-cutting regulatory frameworkFootnote 121 on the use and effects of AI (plus policy tools, such as impact assessment models) which might also include a sectoral approach.Footnote 122 The Feasibility Study gives a general overview of the key issues and describes the CAHAI’s main directions of travel towards a legal framework and policy instruments.

The approach outlined in the Feasibility Study is based on recognition that the existing human rights legal framework already provides guiding principles and provisions that can be applied to AI.Footnote 123 These need to be better contextualised in light of the changes to society brought by AIFootnote 124 to fill three perceived gaps in the legal landscape: (i) the need to move from general principles to AI-centred implementation; (ii) the adoption of specific provisions on key aspects of AI (e.g. human control and oversight, transparency, explicability); (iii) the societal impact of AI.Footnote 125

Thus, the Feasibility Study refers to human dignity, the right to non-discrimination, the right to effective remedy and other rights and freedoms enshrined in international human rights law. But it also makes new claims, such as: the right to be informed that one is interacting with an AI system rather than with a human being (especially where there is a risk of confusion which can affect human dignity);Footnote 126 the right to challenge decisions informed and/or made by an AI system and demand that such decisions be reviewed by a human being; the right to freely refuse AI-enabled manipulation, individualised profiling and predictions, even in the case of non-personal data processing; the right to interact with a human being rather than a robot (unless ruled out on legitimate overriding and competing grounds).Footnote 127

In considering these proposals, it is worth noting that the Feasibility Study is not a legal document and uses the language of a policy document rather than the technical language of a legal text such as a regulation. Many of these rights are therefore not new stand-alone rights, but are intended (including through creative interpretation) to complement already existing rights and freedoms, as part of the Council of Europe’s contextualisation and concretisation of the law to deal with AI and human rights.

Along with these proposals for the future legal framework, the Feasibility Study also suggests several policy initiatives to be further developed through non-binding instruments or industrial policy, such as those on auditing processes, diversity and gender balance in the AI workforce, or environmentally friendly AI development policies.Footnote 128

In line with the CAHAI mandate and the Council of Europe’s field of action, the path marked out by the Feasibility Study also includes two sections on democracy and the rule of law.Footnote 129 While the extension of the proposed rights and obligations to these fields is significantly narrower than in the case of human rights, this move is atypical in the global scenario of AI regulation, which tends to exclude holistic solutions comprising democracy and the rule of law, or to rely on sector-specific guidelines to address these questions.Footnote 130

In the field of democracy, the most important rights with regard to AI are those concerning democratic participation and the electoral process, diverse information, free discourse and access to a plurality of ideas, and good governance. They also entail the adoption of specific policies on public procurement, public sector oversight, access to relevant information on AI systems, and fostering digital literacy and skills.

As for the rule of law, the main risks concern the use of AI in the field of justice. Here the Feasibility Study refers to the right to judicial independence and impartiality, the right to legal assistance, and the right to effective remedy. In policy terms, Member States are encouraged to provide meaningful information to individuals on the AI systems used in justice and law enforcement, and to ensure these systems do not interfere with the judicial independence of the court.Footnote 131

The Council of Europe thus takes a risk-based approach to AI,Footnote 132 introducing risk assessment criteria, ‘red lines’ for AI compatibility with human rights, and mechanisms for periodic review and audits.Footnote 133

More specifically, the Feasibility Study considers risk assessment and management as part of the wider human rights due diligence process and as an ongoing assessment process rather than a static exercise.Footnote 134 For the future development of its impact assessment approach, the study takes as a reference framework the “factors that are commonly used in risk-impact assessments”. It explicitly mentions the following main parameters: (i) the potential extent of the adverse effects on human rights, democracy and the rule of law; (ii) the likelihood that an adverse impact might occur; (iii) the scale and ubiquity of such impact, its geographical reach, its temporal extension; and (iv) the extent to which the potential adverse effects are reversible.Footnote 135
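The four parameters listed above can be pictured as the fields of a simple assessment record. The sketch below is purely illustrative – the class name, field names and ordinal scales are assumptions of this example, not part of the Feasibility Study:

```python
from dataclasses import dataclass


@dataclass
class AdverseImpact:
    """Illustrative record of the Feasibility Study's four risk-impact
    parameters; field names and the 0-2 ordinal scales are assumptions."""
    extent: int       # (i) potential extent of adverse effects (0=minor .. 2=severe)
    likelihood: int   # (ii) likelihood that an adverse impact might occur
    scale: int        # (iii) scale/ubiquity, geographical reach, temporal extension
    reversible: bool  # (iv) whether the potential adverse effects are reversible

    def priority(self) -> int:
        """One possible way to rank impacts for further scrutiny:
        irreversible harms are weighted more heavily."""
        return self.extent + self.likelihood + self.scale + (0 if self.reversible else 2)
```

A record like `AdverseImpact(extent=2, likelihood=2, scale=2, reversible=False)` would then rank highest for scrutiny; the weighting is only one of many defensible choices.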

On the basis of this Feasibility Study, the CAHAI created three working groups:Footnote 136 the Policy Development Group (CAHAI-PDG) focused on policies for AI development (soft law component); the Consultations and Outreach Group (CAHAI-COG) tasked with developing consultations with various stakeholders on key areas of the Feasibility Study and the CAHAI’s ongoing activity; and the Legal Frameworks Group (CAHAI-LFG) centred on drafting proposals for the future legal framework (hard law component). Though from different angles, these three working groups all adopt the Council of Europe’s risk-based approach and its implementation through impact assessment tools and provisions.

The main outcomes are expected to come from the CAHAI-LFG, in the form of binding provisions on impact assessment, and the CAHAI-PDG, with the development of an impact assessment model centred on human rights, democracy, and the rule of law. The CAHAI-COG multi-stakeholder consultations found clear expectations of the impact assessment in AI regulation, and stakeholders saw this as the most important mechanism in the Council of Europe’s new framework.Footnote 137

Based on the CAHAI’s work, and the more specific contribution of the CAHAI-LFG working group, the Council of Europe’s risk-based AI model will introduce an assessment of the impact of AI applications on human rights, democracy, and the rule of law.Footnote 138 While the HRIA is not new, as discussed above, the inclusion of democracy and the rule of law is innovative and challenging.

The democratic process, and democracy in its different expressions, covers a range of topics, and it is not easy, from a methodological perspective, to assess the impact of a technology or its applications on it, particularly since it is hard to assess the level of democracy itself.

This does not mean that it is impossible to carry out an impact assessment on specific fields of democratic life, such as the right to participation or access to pluralist information, but this remains a HRIA, albeit one centred on civil and political rights.Footnote 139 Evaluation of the impact of AI on democracy and its dynamics in general is still quite difficult.Footnote 140

Different considerations apply to the rule of law, where the more structured field of justice and the limited application of AI make it easier to envisage uses and foresee their impact on a more uniform and regulated set of principles and procedures than in the case of democracy. Here again, however, the specificity of the field and the interests involved may raise some doubts about the need for an integrated risk assessment model – including human rights, democracy, and the rule of law – as opposed to a more circumscribed assessment of the impact of certain AI applications on the rule of law.

The HUDERIA (HUman rights, DEmocracy and the Rule of law Impact Assessment)Footnote 141 proposed by the CAHAI therefore seems much more challenging in its transition from theoretical formulation to concrete implementation than the HRESIA, given the latter’s modular structure and its distinction between human rights assessment (entrusted to the customised HRIA) and the social and ethical assessment (entrusted to committees of experts). The HUDERIA’s difficulties appear to be confirmed by the slower progress of the CAHAI-PDG’s work on this model compared with the rest of the CAHAI’s activities.

Looking at the criteria proposed by the CAHAI-LFG for the impact assessment, they are largely those commonly used in impact assessment theory, i.e. likelihood and severity. Several factors are considered in relation to the severity of the impact (gravity, number of people affected, characteristics of impacted groups, geographical and demographical reach, territorial extension, extent of adverse effects and their reversibility, cumulative impact, likelihood of exacerbating existing biases, stereotypes, discrimination and inequalities). The assessment model should also consider further concurring factors, such as AI-specific risk increasing factors, the context and purpose of AI use, possible mitigation measures, and the dependence of potentially affected persons on decisions based on AI.Footnote 142

The model envisaged is based on the traditional five risk levels (no risk, low, medium, high, extreme). The proposed provisions also leave room for the precautionary principle when it is impossible to assess the envisaged negative impact.
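The mechanics of such a model – likelihood combined with severity, mapped onto the five traditional levels, with a precautionary fallback where the negative impact cannot be assessed – can be sketched schematically. The matrix values and function below are illustrative assumptions, not the CAHAI-LFG's actual calibration:

```python
from enum import IntEnum
from typing import Optional


class RiskLevel(IntEnum):
    """The five traditional risk levels mentioned in the proposal."""
    NO_RISK = 0
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    EXTREME = 4


# Illustrative likelihood x severity matrix (indices 0=low, 1=medium,
# 2=high); the cell values are assumptions for this sketch only.
MATRIX = [
    [RiskLevel.NO_RISK, RiskLevel.LOW,    RiskLevel.MEDIUM],
    [RiskLevel.LOW,     RiskLevel.MEDIUM, RiskLevel.HIGH],
    [RiskLevel.MEDIUM,  RiskLevel.HIGH,   RiskLevel.EXTREME],
]


def assess(likelihood: Optional[int], severity: Optional[int]) -> RiskLevel:
    """Map likelihood and severity (0..2) to one of five risk levels.

    Where the envisaged negative impact cannot be assessed (None),
    the precautionary principle defaults to the highest level.
    """
    if likelihood is None or severity is None:
        return RiskLevel.EXTREME
    return MATRIX[likelihood][severity]
```

The precautionary branch reflects the proposal's provision for cases where assessment is impossible; a real model would, of course, also encode the concurring factors listed above.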

Finally, the level of transparency of the results of the assessment – in terms of their public availability – and the accountability, auditability and transparency of the process are also considered in the CAHAI-LFG proposal.

At the time of writing, the proposed HUDERIA model adopts an iterative and participatory model in four stages – identification of relevant rights, assessment of the impact on those rights, governance mechanisms, and continuous evaluation – which are common to all impact assessments. Its distinguishing feature is “that it includes specific analysis of impact on fundamental rights proxies which are directed towards the Rule of Law and Democracy”.Footnote 143 Notably, the CAHAI documents do not limit the impact assessment obligations to specific AI applications in certain fields, a (high) level of risk, or the nature and purpose of the technology adopted.

4.3.2 The European Commission’s Proposal (AIA) and Its Conformity-Oriented Approach

After an initial approach centred on ethicsFootnote 144 and the White Paper on Artificial Intelligence,Footnote 145 in April 2021 the European Commission proposed an EU regulation on AI (hereinafter the AIA Proposal).Footnote 146 This proposal introduces two new elements: the departure from more uncertain ethical grounds towards the adoption of a hard law instrument, albeit within the familiar framework of co-regulation;Footnote 147 the adoption of a regulation in the absence of national laws on AI or differing approaches among EU Member States.

The latter aspect highlights the EU legislator’s concerns about the rapid development of AI, the EU’s limited competitive power in this area in terms of market share, and the need to address the public’s increasing worries about AI which might hamper its development.Footnote 148 The typical harmonisation goal of EU regulations – not applicable here in the absence of national laws on AI – is therefore replaced by a clear industrial strategy objective embodying a stronger and more centralised regulatory approach by the Commission which is reflected in the AIA Proposal.

As in the case of data protection, the EU proposal therefore stands within the framework of internal market interests, while protecting fundamental rights.Footnote 149 This focus on the market and competition appears to be the main rationale behind regulating an as yet unregulated field, designed to encourage AI investment in the EU.Footnote 150 It also emerges clearly from the four objectives of the proposed regulation: (i) ensure that AI systems marketed and used in the Union are safe and respect existing law on fundamental rights and Union values; (ii) guarantee legal certainty to facilitate investment and innovation in AI; (iii) enhance governance and effective enforcement of existing law on fundamental rights and safety requirements applicable to AI systems; (iv) facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.Footnote 151

In this context a central role is necessarily played by risk regulation, as in the first generation of data protection law where citizens were concerned about the potential misuse of their data and public and (some) private entities were aware of the value of personal data in enabling them to carry out their work. For this reason, the EU proposal wishes to limit itself to the “minimum necessary requirements to address the risks and problems linked to AI, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market”.Footnote 152

These goals and the framing of the risk-based approach reveal how the EU differs from the Council of Europe, which places greater emphasis on the safeguarding of human rights and fundamental freedoms. This inevitably impacts on the risk management solutions outlined in the AIA Proposal.

The European Commission’s ‘proportionate’Footnote 153 risk-based approach addresses four levels of risk: (i) extreme-risk applications, which are prohibited;Footnote 154 (ii) high-risk applications, dealt with by a conformity assessment (of which HRIA is only one component); (iii) a limited number of applications with a significant potential to manipulate persons, which must comply with certain transparency obligations; (iv) non-high-risk uses, dealt with by codes of conduct designed to foster compliance with the AIA’s main requirements.Footnote 155 Of these, the most important from a human rights impact assessment perspective are the provisions on high-risk applications.

The first aspect emerging from these provisions is the combination, under the category of high-risk applications, of AI solutions impacting on two different categories of protected interests: physical integrity, where AI systems are safety components of products/systems or are themselves products/systems regulated under the New Legislative Framework legislation (e.g. machinery, toys, medical devices, etc.),Footnote 156 and human rights in the case of so-called stand-alone AI systems.Footnote 157

Safety and human rights are two distinct realms. An AI-equipped toy may raise concerns around its safety, but have no or only limited impact on human rights (e.g. partially automated children’s cars). Meanwhile another may raise concerns largely in relation to human rights (e.g. the smart doll discussed in Chap. 2). AI may have a negative impact and entail new risks for both safety and human rights, but the fields, and related risks, are separate and require different remedies. This does not mean that an integrated model is impossible or even undesirable, but that different assessments and specific requirements are essential.

Looking at the risk model outlined by the AIA Proposal, its structure is based on Article 9. The chief obligations on providers of high-risk AI systems,Footnote 158 as set out in Article 16, regard the performance of a conformity assessment (Articles 19 and 43, Annexes VI and VII) and the adoption of a quality management system (Article 17). The conformity assessment – except for the AI regime for biometric identification and the categorisation of natural personsFootnote 159 and the AI applications regulated under the New Legislative Framework (NLF) legislation – is an internal self-assessment process based on the requirements set out in Annex VI. This Annex requires an established quality management system in compliance with Article 17 whose main components include the risk management system referred to in Article 9.Footnote 160

In this rather convoluted structure of the AIA Proposal, Article 9 and its risk management system is the key component of a combined conformity assessment and quality management system. Indeed, the quality management system comprises a range of elements which play a complementary role in risk management. However, the risk assessment and management model defined by Article 9 is based on three traditional stages: risk identification, estimation/evaluation, and mitigation.

The peculiarity of the AIA model lies in the fact that the risk assessment is performed in situations already classified by the AIA as high-risk cases. In the EU’s proposal, the risk-based approach consists mainly of risk mitigation rather than risk estimation.

The proposal makes a distinction between the use of AI in products already regulated under safety provisions, with some significant exceptions,Footnote 161 and the rest. In the first group, AI is either a safety component of these productsFootnote 162 or itself a product in this category. The second group consists of stand-alone AI systems not covered by the safety regulations but which, according to the European Commission, carry a high risk.

This classification emphasises the importance of the high-risk evaluation set out in the AIA Proposal. For regulated safety applications, risk analysis is merely broadened from safety to the HRIA.Footnote 163 For stand-alone AI systems, on the other hand, the Proposal introduces a completely new regulation based on a comprehensive conformity assessment, which includes the impact on fundamental rights.

However, the approach adopted raises questions concerning the following issues: (i) a top-down and more rigid system of high-risk assessment; (ii) a critical barrier between high risk and lower risk; (iii) opaque regulation of technology assessment (Annex III) and risk assessment carried out by providers (Article 9); (iv) use of the notion of acceptability; (v) marginalisation of the role of AI system users. These elements, discussed below, all reveal the distinction between the AIA Proposal’s complicated model of risk management and the HRIA’s cleaner model based on a general risk assessment.Footnote 164

Given the variety of fields of application of AI and the level of innovation in this area, dividing high-risk applications into eight categories and several sub-fields seems to underestimate the evolving complexity of the technology scenario.

Considering how rapidly AI technology is evolving and the unexpected discoveries regarding its abilities,Footnote 165 a closed list of typical high-risk applications may not be easy to keep up to date properly or promptly.Footnote 166 In addition, the decision to delegate such a key aspect to the Commission, the EU’s executive body,Footnote 167 is likely to raise concerns in terms of the allocation of powers.

A closed list approach (albeit using broad definitions and open to updating) appears to be reactive rather than preventive in anticipating technology development. By contrast, a general obligation of an AI impact assessment (HRIA) does not suffer from this shortcoming and can act more swiftly in detecting critical new applications. Moreover, a general risk assessment removes the burden of rapidly updating the list of stand-alone high-risk applications, which can remain an open list of presumed high-risk cases, as in Article 35.3 of the GDPR.

The focus on a list of high-risk cases also introduces a barrier between these cases and the rest, where risks are lower. This sharp dichotomy contrasts with a more nuanced consideration of risk and its variability depending on different technology solutions, contexts, and so on. Furthermore, a rigid classification of high-risk applications leaves room for operators wishing to circumvent the regulation by denying that their system falls into one of the listed categories.Footnote 168

Finally, as pointed out in Chap. 2, this cumulative quantification of the level of risk of a given application (described as a high-risk use of AI) contradicts the necessarily multifaceted impact of AI applications, which usually concerns different rights and freedoms. The impact may therefore be high with respect to some rights and medium or low with respect to others. The different nature of the impacted rights makes it impossible to define an overall risk level.

The only possible conclusion is that if there is a high risk of a negative impact on even one right or freedom, the overall risk of AI application is high. This is in line with the idea that all human rights must be protected and the indivisible, interdependent and interrelated nature of human rights.

The categories of high-risk application set out in Annex III are defined on the basis of a technology assessment resting on four key elements: (i) AI system characteristics (purpose of the system and extent of its use or likely use); (ii) harm/impact (caused or foreseen harm to health and safety or adverse impacts on fundamental rights; potential extent of such harm or such adverse impacts; reversibility); (iii) condition of affected people (dependency or vulnerability); (iv) legal protection (measures of redressFootnote 169 or to prevent or substantially minimise those risks).

This is necessarily an abstract exercise by the legislator (and in future by the Commission), which uses a future scenario approach or, when referring to existing practices, generalises or aggregates several cases. The assessment required by Article 9, on the other hand, is a context-specific evaluation based on the nature of the particular AI application. These different types of assessment suggest that the applications listed in Annex III, in their context-specific use, may not entail the high level of risk presumed by the Regulation.

In addition, the Proposal fails to explain how, on the basis of which parameters, and by what method of evaluation these risks should be assessed in relation to specific AI applications under Article 9. Nor, with regard to the general technology assessment used for the Annex III list, does the Commission’s Proposal provide transparency on the methodology and criteria adopted.Footnote 170

Another aspect that requires attention is the relationship between high-risk, residual risk and acceptability.Footnote 171 Risk assessment and mitigation measures should act in such a way that the risk “associated with each hazard as well as the overall residual risk of the high-risk AI systems is judged acceptable”. But the AIA Proposal fails to provide a definition of acceptable risk.

The notion of acceptable risk comes from product safety regulation, whereas in the field of fundamental rights the main risk factor is proportionality. While acceptability is largely a social criterion,Footnote 172 Article 2(b) of Directive 2001/95/EC on general product safety defines a safe product as one that “does not present any risk or only the minimum risks compatible with the product’s use, considered to be acceptable”. Here acceptability results from an absence of risk or “minimum risks”, which is necessarily context-dependentFootnote 173 and suggests a case-specific application of the criteria set out in Article 7.2 of the AIA Proposal. What is more, these criteria – such as the focus on product characteristics, the categories of consumers at risk and the measures to prevent or substantially minimise the risks – are consistent with those considered by Article 2(b) of Directive 2001/95/EC.

If we accept this interpretation, a high risk of adverse impacts on fundamental rights is incompatible with acceptability, and any impact assessment based on a quantification of risk levels will play a crucial part in risk management.

Finally, the AIA Proposal marginalises the role of AI users. They play no part in the risk management process and have no obligations in this regard, even though AI providers market solutions that are customisable by users. AI usersFootnote 174 may, through their particular use of the systems, independently increase or alter the risks of harm to health and safety and, especially, the impact on individual and collective rights, given the variety and context dependence of those rights.

For example, an AI company can offer a platform for participatory democracy, but its implementation can be affected by exclusion biases depending on the user’s choice of settings and the specific context. AI providers cannot fully take into account such contextual variables or foresee the potentially affected categories, so their adoption of a general mitigation strategy will have only a limited effect.Footnote 175 Risk management and risk assessment duties should therefore also apply to AI users in proportion to their role in system design and deployment.

In line with risk management theory and practice, a circular iterative approach is adopted by the AIA Proposal, including post-market monitoring.Footnote 176 This is crucial since lifelong monitoring, learning and re-assessment are essential elements in an evolving technology scenario where risk levels may change over time.Footnote 177

Considering the AIA Proposal as a whole, the legislator’s rationale is to largely exempt AI users (i.e. entities using AI systems under their own authority) from risk management duties and to avoid creating extensive obligations for AI producers, limiting the regulatory impact to specific sectors characterised by potential new AI-related risks or by the use of AI in already regulated product safety areas.

While this is effective in terms of policy impact and acceptability, it is a weak form of risk prevention. The Proposal makes a quite rigid distinction between high risk and the rest, providing no methodology to assess the former and largely exempting the latter from any mitigation (with the limited exception of transparency obligations in certain cases).

In addition, two large elements are missing from the EU’s Proposal: integration between law and ethical/societal issues and the role of participation. As for the first, following several years of discussion of the ethical dimension of AI, the prevailing vision seems to be to delegate ethical issues to other initiativesFootnote 178 not integrated with the legal assessment. In the same way that focusing exclusively on ethics was critical,Footnote 179 this lack of integration between the legal and societal impacts of AI is problematic. An integrated assessment model, like the HRESIA, could overcome this limitation in line with the proposed risk-based model.

Equally, introducing a participatory dimension to the assessment model, covering both legal and societal issues, would bridge the second gap, related to the lack of participation, and align the AIA proposal with the emphasis on civic engagement of other EU initiatives and a vision of AI use for the benefit of citizens.Footnote 180

4.4 The HRESIA Model’s Contribution to the Different Approaches

Looking at the three approaches to AI regulation described at the beginning of this chapter, neither the Council of Europe nor the European Commission decided to adopt a principles-based approach. This is so even though several of the key principles enshrined in binding and non-binding human rights instruments can be valuable – with due contextualisation – for AI regulation, and are also partially reflected in the proposals of both bodies.

The predominant focus on risk and accountability is probably due to the reductive and incremental approach of this first stage of AI regulation, as was the case with data protection in the 1970s or with regard to product safety in the first phase of industrial mass production.Footnote 181 As with the early data protection regulations, the priority is to establish specific procedural, technical and organisational safeguards against the most serious risks rather than building a clear and complete set of governing principles.

The EU’s closed list of high-risk systems, and the Council of Europe’s key guiding principles for AI development and use reflect the fact that these proposals represent the first generation of AI regulation.

As with data protection,Footnote 182 further regulation will probably follow, broader in scope and establishing a stronger set of guiding principles. With regard to the EU initiative, a fuller consideration of the potential widespread impact of non-high-risk applications and the challenges of rigid pre-determined risk evaluation systems could provide more effective protection of individual rights and collective interests.

Both proposals are also characterised by a focus on the legal dimension at the expense of a more holistic approach covering ethical and societal issues, which are either ignored or delegated to non-legal instruments.

This gap could be bridged by a hybrid model, such as the HRESIA, combining human rights and ethical and societal assessments to give a more complete view of the consequences of AI applications and affect their design. This is even more important in the case of large-scale projects or those with significant effects on social communities.

In addition, the key notion of acceptability in the AIA Proposal,Footnote 183 discussed in the previous section, necessarily implies the value of the HRIA in assessing the impact on fundamental rights covered by Article 9. But it would also benefit from the broader HRESIA model, given the societal dimension of acceptability,Footnote 184 which should receive greater attention with regard to each context-specific AI application and be addressed by expert committees, as described in Chap. 3.

Regarding the costs and resources involved in extending the HRESIA, we should recall the considerations expressed above about the model’s modularity and scalability.Footnote 185 Based on a HRIA and adopting internal advisors for the societal issues, the burdens are proportional to the impact of the technology and minimal or negligible in the case of low risk. Moreover, the experience gained by the HRESIA experts would further reduce the costs in relation to the frequency of the assessments.

Both the Council of Europe and the European Commission suggest a self-assessment procedure, in line with the HRESIA model. The HRESIA model also includes a layer of participation, which is mentioned by the Council of EuropeFootnote 186 and whose absence is one of the recognised shortcomings of the AIA Proposal.

The EU Proposal limits the obligation to perform an impact assessment to AI providers, in line with the thinking behind product safety regulation. However, a more nuanced approach is required, given the part played by providers and users in the development, deployment and use of AI applications, and the potential impacts of each stage on human rights and freedoms.

It is worth remembering that AI differs from data protection in the greater role that AI providers play in the complicated and often obscure AI processing operations.Footnote 187 This makes it inappropriate to recreate the controller/processor distinction, albeit with different nuances,Footnote 188 regardless of the criticisms expressed about the distinction itself.Footnote 189 Still, the effective role played by AI usersFootnote 190 in system design and deployment should be addressed by involving them in risk management and assessment duties.

This can be achieved for most AI systems in use, except in those cases where the user has little ability to customise or train the system for a specific context, and a HRESIA should be performed by all entities that use third-party AI services for their own purposes. This does not mean that the HRESIA cannot be used by producers in the design of their systems, but suggests a model – already proposed in data protection regulationFootnote 191 – in which providers perform the HRESIA on their products, while AI users perform their own HRESIA with regard to the specific implementation.

Finally, both the Council of Europe and the European Commission base their approaches on risk assessment and a series of variables to be considered, but fail to specify a method of assessing the level of risk, making them difficult to put into practice.Footnote 192 In contrast, the HRESIA not only identifies the assessment criteria but also explains how to go about defining the risk levels and evaluating the systems.

With its nature, scope and methodology, the HRESIA model not only responds to the AI impact assessment requirements of the European proposals, but could also address the shortcomings of the proposed provisions and serve as a model that is as yet absent from the ongoing work of these regulatory bodies.

4.5 Summary

The ongoing debate on AI in Europe has been characterised by a shift in focus, from the identification of guiding ethical principles to a first generation of legal obligations on AI providers.

Although the debate on AI regulation is still fluid at a global level and the European initiatives are in their early stages, three possible approaches are emerging to ground AI regulation on human rights.

One option is a principles-based approach, comprising guiding principles derived from existing international binding and non-binding human rights instruments, which could provide a comprehensive framework for AI, in line with previous models such as Convention 108 or the Oviedo Convention.

A different approach focuses more narrowly on the impacts of AI on individual rights and their safeguarding through rights-based risk assessment. This is the path followed by the Council of Europe in its ongoing work on AI regulation.

Finally, as outlined in the EU proposal, greater emphasis can be placed on managing high-risk applications focusing on product safety and conformity assessment, combining safety and rights protection with a predefined risk classification.

Despite the differences between these three models, they each share a core concern with protecting human rights, recognised as a key issue in all of them. Moreover, while this first generation of AI regulation reveals a pragmatic approach with a focus on risk management at the expense of a framework of guiding principles and a broader consideration of the role of AI in society, this does not rule out a greater emphasis on these aspects in future regulation, as happened with data protection.

Identifying a common core of principles can be of help for this second stage of AI regulation. In the end, therefore, all three approaches can contribute in different ways and probably with different timescales to posing the building blocks of AI regulation.

In these early proposals for AI regulation, the emphasis on risk management is not accompanied by effective models to assess the impact of AI on human rights. Following the turn from ethical guidelines to legal provisions, there are no specific instruments to assess not just the legal compliance of AI solutions, but their social acceptability, including a participatory evaluation of their coherence with the values of the target communities.

Analysis of the current debate confirms that the HRESIA may not only offer an effective response to the demand for human rights-oriented AI development that also encompasses societal values, but may also bridge a gap in the present regulatory proposals. Furthermore, a general risk assessment methodology is better suited to the variety of AI and technology developments than regulatory models based on a predefined list of high-risk applications or, at any rate, might better guide rule-makers in defining such a list.