5.1 Addressing the Challenges of AI

For more than fifty years, the progressive digitalisation and datafication of our societies and their impact on individuals have been largely managed by legislators through data protection laws. In a world concerned about the use (and misuse) of personal information, data protection became the key component of the response at both the individual and the social level.

Since its origins, data protection has been seen as an enabling right to tackle potential risks concerning discrimination, undemocratic social control, invasion of private life, and limitations on several freedoms, such as freedom of thought, expression, and association.

However, this link between data protection and human rights (fundamental rights in the EU) has been little explored in the cases decided by data protection authorities or in the literature.Footnote 1 Although the relationship between data protection and other competing rights has been considered in court decisions, the theory and practice of data protection remain largely remote from human rights doctrine and from the attention of human rights experts. This also reflects the different backgrounds of the main scientific communities in these fields: privacy scholars traditionally come from private, constitutional or administrative law, while human rights scholars have an international law background and are more focused on prejudice to human rights other than privacy and data protection.

This barrier between the two areas has collapsed under the blows of the latest wave of AI development, from the last decade of the twentieth century to the present day. Pervasive datafication, together with the use of AI for a variety of activities impacting on society, from medicine to crime prevention, has raised serious concerns about the potentially harmful effects of data-intensive AI systems. This has led legislators and policymakers to look beyond data and data protection and to consider the different ways in which AI might interfere with human organisations and behaviour, from automated decision-making processes to behavioural targeting.

The breadth of the questions raised by AI and by the relationship between machines (and those who determine their underlying values) and humans, the struggle of traditional data protection principles to fully address these new and broader issues,Footnote 2 and the limited discussion of human rights in AI led businesses and regulators to look to ethics for answers to these challenges.

However, the variety of ethical approaches stood in contrast to the need for a common framework in a world of global players and of the same models replicated across different countries. This has led AI regulators to the current debate on a future legal framework, in which human rights represent a key component in addressing the potential risks of AI.

Having briefly summarised this trajectory and highlighted the valuable contribution that an assessment model encompassing human rights, ethical and societal issues can provide, the big challenge that still faces us is how to implement this approach in practice. Two different scenarios have to be taken into account: (i) AI development and use in countries where human rights are protected by national law and where compliance is therefore mandatory for business and the public sector, and (ii) AI development and use, by companies and their subsidiaries and suppliers, in countries where those rights are not fully protected, or not protected at all, despite the ratification of international human rights treaties. It should be remembered that, in both cases, ethical and social issues remain largely outside the legal discourse and an awareness of AI’s impact in these spheres remains lacking.

While in the first scenario the HRESIA can be implemented more easily, in the second, where business is conducted in the absence of national human rights safeguards, the United Nations’ Guiding Principles on Business and Human Rights may be of help.Footnote 3 These Principles, and specifically Section II on the corporate responsibility to respect human rights, enshrine several key HRIA requirements (stakeholder consultation, regular assessment, transparency, the role of experts, etc.).Footnote 4 While this is not a legally binding instrument, it does represent an influential global model for addressing the relationship between human rights and business.Footnote 5

However, despite the presence of this authoritative framework, the impact of these principles is still limited, perhaps because of their focus on the entire value chain, which normally demands an extensive effort in all directions.Footnote 6 The ongoing debate on the Guiding Principles on Business and Human Rights and the challenges their application raises may point the way to narrower, product-focused human rights assessments, such as the HRESIA, which spotlight the design of each product or service rather than targeting the entire business.Footnote 7

If the lack of legal safeguards for human rights at a national level is problematic, the situation is much more complicated when we consider the ethical and societal values underpinning AI development and use. Here, even proposed human rights-oriented regulations do not specifically address the societal acceptability of AI, and its compatibility with societal values is not fully reflected in the law.Footnote 8

Rather than try to arrive at improbable universal ethical and social values or, on the contrary, shape codes of ethics to fit corporate values, the best solution is probably to use experts to understand the context. Experts can help identify underlying societal values and also make for greater accuracy and inclusion through active dialogue with stakeholders and participatory processes.Footnote 9

5.2 The Global Dimension of AI

As in the case of data processing, the global use of AI technologies is making regulation a pressing challenge. Although only a few proposals for AI regulation are available, and these are as yet in their early stages, we can envisage what might happen in the future in terms of global regulatory competition and fragmentation.

On the one hand, Europe might build on its front-runner status in data protection to reproduce for AI the so-called Brussels effect,Footnote 10 as well as the Strasbourg effect,Footnote 11 exporting its regulatory model and risk-based approach, including its attention to human/fundamental rights.

On the other, it is worth recalling the limits of the universal human rights positionFootnote 12 and European legislators’ dependence on the European Court of Human Rights and the European Court of Justice, which makes it hard to export the European models to different legal contexts.Footnote 13

In addition, regulatory fragmentation at a regional level may ensue from state policies targeting digital sovereignty, whether with the intention of bolstering human rights or, on the contrary, in countries wishing to limit these individual rights and freedoms.

This scenario is not new and was already seen with respect to data protection. Data localisation obligations and restrictions on transborder data flows were introduced by European countries under Convention 108 or the GDPR to provide their citizens with a greater level of protection than that offered by third countries with weaker data protection regimes, or to safeguard competing interests (national security, defence, public safety, etc.).Footnote 14 Meanwhile, some countries have introduced rules on transborder data flows and data localisation for foreign service providers not to safeguard human rights, but as a means to secure governmental control over their citizens’ online behaviour.

Replicating European progress in data protectionFootnote 15 in the regulation of AI around the world therefore looks unlikely. Despite the worldwide interest in the EU and Council of Europe AI initiatives, we must remember that Convention 108 dates back to 1981 and the GDPR was built on a 1995 Directive. While we might envisage a Brussels/Strasbourg effect for AI, even conceding a faster international harmonisation in response to the globalisation of services, needs and trends, it is unrealistic to expect a common legal framework on AI to be realised any time soon. This is partly due to the difficulties of exporting the European models noted above, but also to the varying regulatory approaches of some states, in particular with respect to recognising human rights.

This means that, at present, a holistic assessment model which includes the contextualisation of human rights and socio-ethical values in a given area could be an effective answer both for countries that have human rights-based AI regulation and for those that do not. For the former, the HRESIA could be integrated into proposed AI risk assessment procedures,Footnote 16 while for the latter it would help companies and other bodies develop a new approach, recognising the impact of AI applications on society in line with human rights-oriented business practices.

Indeed, assessment models like the HRESIA need not be mandatory but could be voluntarily included in business and public sector best practices when dealing with legal and societal needs. Of course, whether the assessment is mandatory or voluntary would affect its adoption and the achievement of its goals.

The absence of a mandatory obligation would only reinforce concerns already expressed about the self-assessment of AI risks,Footnote 17 given the conflicting interests of AI manufacturers and users. Yet while the danger of unfair risk assessment exists, both mandatory and voluntary schemes are open to manipulation, and in both cases internal mitigation measures could be taken to combat it.Footnote 18

Moreover, the new notion of trustworthy AI, though based on a non-legal and uncertain frame of reference (trust), highlights the importance of the relationship between AI providers/users and end-users. A wider adoption of impact assessments by providers/users can certainly play a part in boosting confidence among AI end-users.

Given the increasing public concern about invasive and pervasive data-intensive applications,Footnote 19 and the growing attention of policymakers to the side effects of their use amid the concentration of power in digital services, building trust has become a major goal for AI providers and users. Though a variety of strategies (including marketing) can be used to achieve this, implementing a risk assessment model, with its transparent outcomes and practices, can be an effective way to develop genuinely trustworthy AI.

Adopting holistic assessment and values-oriented design procedures such as the HRESIA could therefore replicate in AI the experience and results achieved in other sectors with regard to human rights and ethical practice, including the repercussions for business reputationFootnote 20 and consumer/investor choicesFootnote 21 (e.g. fair trade labels).Footnote 22 The implementation might even be certified. Here, the effect on the biggest AI adopters (e.g. municipalities) would be even more significant if they were accountable to AI end-users.

Moreover, a greater focus on these requirements by the big players and in public procurementFootnote 23 could also help to offset the scant interest in these issues shown by many AI start-ups and SMEs. A bottom-up demand for responsible AI, supported by appropriate assessment models, could counter a lack of focus on societal and human rights questions that stems from limited competence or from inattention to aspects not immediately related to business profits.Footnote 24

On the other hand, following the European model in introducing a mandatory AI human rights impact assessment,Footnote 25 hopefully extended to non-legal societal issues, would undoubtedly foster a quicker diffusion of this practice.Footnote 26 But this option has its own implications that need to be thought through.

In the first place, a universal mandatory assessment might provoke adverse reactions from businesses complaining of additional burdens and costs. While these are proportional to the complexity of the AI and the risks in question, legislators could be induced (see the EU proposal) to restrict mandatory assessments to certain categories of applications. This could result in a dual situation, with some areas fully secured and monitored (or even over-scrutinised, given the broad categories in the AIA proposal, which potentially include non-high-risk applications) while other widespread AI uses go largely unregulated despite their not insignificant risks.

Second, the history of data protection reveals the difference between the ambitions of the law and its concrete implementation. Underfunded and understaffed supervisory authorities, the pervasive adoption of data-intensive solutions, the opacity of processing operations, foreign providers, and the interplay between AI developers and governments are all factors that may weaken the enforcement of mandatory solutions, as happened with data protection.Footnote 27

Very likely, in the coming years both mandatory and non-mandatory AI risk assessment models will coexist, possibly together with the adoption of technical standards. A middle way based on ex post assessment is also possible, in response to concerns raised by some supervisory authorities. Here the dual dimension of the HRESIA model, in its universal and local treatment of human rights and societal values, might also make it a useful tool for supervisory authorities.

Finally, the global scenario in which AI should be seen also highlights the value of a risk-based approach from the perspective of the historical development of system use. Particularly in the public sector, a lack of attention to human rights and societal impact can encourage a sort of development bias, which sees only the positive results of AI and disregards or underestimates potential misuse. As recently demonstrated by the use of data-intensive biometric systems in AfghanistanFootnote 28 (as well as by some contact-tracing applications during the Covid-19 pandemicFootnote 29), the lack of a holistic assessment of the potential consequences of AI-based systems can be damaging. It also fails to give voice to minorities, affected groups and stakeholders, leading to technology-driven solutions whose efficiency is not matched by freedom from risk when operating conditions or system controllers change.

5.3 Future Scenarios

A thread running through this book has been the idea of looking beyond data protection to tackle the challenges of AI and avoid a split between the focus on human rights and ethics in the broader sense. While today a growing number of voices are calling for a human rights assessment, this option was largely unexplored at the start of this research, and the question of how to put a human rights-based approach to AI into practice remains little examined.

The first chapter pointed out the reasons for this shift of focus in the regulation of data-intensive AI systems from data protection to human rights, and highlighted the role that assessment methodologies can play in this change.

A workable methodology that responds to the new paradigm can also help to bridge the gap between the ethical guidelines and practices developed in the last few years and the more recent hard law approach. Here the regulatory turn missed an opportunity to combine these two realms, both of which are significant when AI applications are used in a social context and have an impact on individuals and groups.

Shaping AI on the basis of a paradigm that rests on legal and societal values through risk assessment procedures does not mean simply crafting a questionnaire with separate blocks of questions for legal issues, ethical values and social impact. Such a simplistic approach tends to overestimate the value of questionnaire-based self-assessmentFootnote 30 and ignores the challenges associated with the idea that AI developers/users can fully perform this evaluation as if it were a mere checklist.

Chapters 2 and 3 therefore outline a more elaborate model, the HRESIA (Human Rights, Ethical and Social Impact Assessment), which combines different tools, from self-assessment and expert panels to participation. The main distinction to be made here is between the Human Rights Impact Assessment (HRIA) module of the HRESIA and the evaluation of ethical and societal values. While the first is based on questionnaires and risk models, the second is characterised by a greater role for experts and for participation in identifying the values to be embedded in AI solutions. Furthermore, the HRIA component, though based on lengthy experience in human rights assessment, reshapes the traditional model to make it better suited to AI applications and to an increasingly popular regulatory approach based on risk thresholds and prior assessment.
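By way of illustration only, the general kind of risk model such a questionnaire-based module draws on can be sketched as a conventional risk matrix of the sort widely used in impact assessment practice; the scales and threshold below are assumptions made for the sake of the example, not the HRESIA’s own metric:

% Illustrative risk-matrix sketch (assumed scales, not the HRESIA's own metric)
\[
R_i = L_i \times S_i, \qquad L_i,\ S_i \in \{1, 2, 3, 4\},
\]

where \(L_i\) is the estimated likelihood and \(S_i\) the estimated severity of the \(i\)-th potential impact on a right or freedom, with mitigation measures required wherever \(R_i\) exceeds a chosen threshold \(\tau\) (for instance, \(\tau = 8\) on this 16-point scale). The ordinal scales and the threshold would in practice be set and revisited by the assessors, which is precisely why the HRESIA pairs such scoring with expert panels and participation rather than treating it as a self-sufficient checklist.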

This interplay between risk assessment and AI regulation led to an examination of the major current proposals, presented by the European Commission and the Council of Europe. Chapter 4 emphasised their limitations compared with the HRESIA model: they do not include ethical and social issues and (in the EU case) restrict risk assessment to predefined high-risk categories. It should be noted, however, that the Council of Europe’s proposal does broaden the assessment to include democracy and the rule of law, in line with its mandate, though this at the same time makes it more complicated to envisage a feasible assessment model that properly covers all these issues without reducing them to a mere list of questions.

As regards the social and ethical components in the design and operation of AI systems and the assessment of their coherence with contextual values, Chapter 3 explored the practices of ethics committees, considering both committees set up by companies and committees in the field of medical ethics and research. Their experience, and their shortcomings, were used to highlight the role of experts in the HRESIA in identifying key societal values, and also to outline how these committees might work, including with the participation of major stakeholders and groups potentially affected by AI applications.

Comparing the HRESIA and its various components with the ongoing proposals for AI regulation shows how the HRESIA can represent a better implementation of the risk-based approach adopted by European legislators and, from a global perspective, encourage a focus on the holistic consequences for society in countries where there are no regulations.

Notwithstanding the positive outcomes that a better understanding of human rights and societal values can bring to AI design, development and use, the longer term poses further questions that are not fully addressed by the HRESIA, and it may be that we have to raise the bar of human rights expectations with respect to an AI-based society. Three main issues will dominate discussion and analysis over the coming years: (i) a partial reconsideration of the traditional theoretical framework of human rights; (ii) the extension of the requirements concerning human rights safeguards, as well as compliance with ethical and social values, to the entire AI supply chain; and (iii) a broader reflection on digital ecosystems.

As for the first issue, there is an ongoing debate on the collective dimension of human rights which is leading us to reconsider the traditional view taken in this field.Footnote 31 The classification of the world by AI and its consequent decision-making processes, irrespective of the identity of the targeted persons and based merely on their belonging to a certain group, suggests we need a broader discussion of the largely individual nature of human rights.

Similarly, the traditional approach to non-discrimination should be reconsidered. Here intersectional studies and other theories can contribute to providing a legal framework more responsive to the new AI scenario.Footnote 32 Nevertheless, the variety of criteria used by business to discriminate in AI, and their lack of a link to protected grounds, suggests that more research is called for into the blurred boundary between unfair discrimination and unfair commercial practices.Footnote 33

Moving from the theoretical framework to impact assessment implementation, this book has focused on the impact of AI-based solutions on their potential social targets, looking ahead to the effects of AI use. But we need to extend the same attention to the upstream stage of this process, namely compliance with human rights and ethical values, as well as the social acceptability of manufacturing practices and of the AI products/services supply chain.Footnote 34

New studies are emerging in this field,Footnote 35 but it remains largely unexplored, especially with regard to possible solutions in terms of policies and regulation. Aspects such as labour exploitation or the environmental impact of AI solutions need to be examined not only for the benefit of AI adoption and development, but also for that of competition. Existing and proposed barriers to market entry are based on legal requirements and standards concerning product safety and the human rights impact of AI use, but they ignore human rights violations in the production of AI.

While some protection of personal data is possible when data subjects belong to countries with robust data protection regulations,Footnote 36 in other cases rights and freedoms are more difficult to protect. This is particularly true where the legal systems of AI-producing countries lack effective human rights protection or enforcement. The UN Guiding Principles on Business and Human Rights can serve as a guide in these cases.

Barriers to market access,Footnote 37 but also mandatory obligations concerning human rights and fundamental freedoms, as well as due diligenceFootnote 38 for subcontractors, can be an important step forward in extending human rights to upstream AI manufacturing, partly following the experience of data protection, but also the EU’s ethical rules on biomedicine and research. This would contribute to an improved AI ecosystem in which respect for human rights and for ethical and social values is widely accepted as a condition for doing business, in the same way that ethical and legal compliance is a requirement in the pharmaceutical industry.

Reference to the AI ecosystem brings us to a final forward-looking scenario, regarding the ability to outline an ecology of the digital environment, including the AI-based applications that will increasingly become its dominant components.

Despite the limited investigation of this topic, we urgently need to revise the approach to digital technology adopted in the wake of the computer revolution in the 1950s. The increasing availability of new, more powerful and cheaper solutions led to the pervasive presence of digital technologies with their limitless appetite for data and the escalating reliance on them by decision makers. The result is a world that is seen more and more through the lens of algorithms and the social values and standpoints of their developers, often without questioning the real need for such systems.Footnote 39

Just as industrial consumer societies are raising questions about the ecological sustainability of the apparently endless abundance of goods and services, the digital society must also question the need for, and acceptability of, a society increasingly governed by pervasive AI. This includes critical questions about the lack of democratic participation and oversight in shaping and adopting AI solutions.

The starting point should not be to see technological evolution as an inevitability that society must adapt to, but to question the desirability of a society based on microtargeting, profiling, social mapping, etc., where the trade-offs for democracy, human rights and freedoms are not necessarily positive, except in the rhetoric of service providers and decision makers who place cost reductions and efficiency at the top of their scale of values.