Abstract
In a technology context dominated by data-intensive AI systems, the consequences of data processing are no longer restricted to the well-known privacy and data protection issues but encompass prejudice to a broader array of fundamental rights. Moreover, the tension between the extensive use of these systems, on the one hand, and the growing demand for ethically and socially responsible data use, on the other, reveals the lack of a framework that can fully address the societal issues raised by AI.
Against this background, neither traditional data protection impact assessment models nor the broader social or ethical impact assessment procedures appear to provide an adequate answer to the challenges of our algorithmic society. In contrast, a human rights-centred assessment may offer a better answer to the demand for a more comprehensive assessment, including not only data protection, but also the effects of data use on other fundamental rights and freedoms.
Given the changes to society brought by technology and datafication, when applied to the field of AI the Human Rights Impact Assessment must then be enriched to consider ethical and societal issues, evolving into a more holistic Human Rights, Ethical and Social Impact Assessment (HRESIA), whose rationale and key elements are outlined in this chapter.
Keywords
- AI
- Data protection
- Ethical Impact Assessment
- Human rights
- Privacy Impact Assessment
- Risk-based approach
- Self-determination
- Social Impact Assessment
1.1 Introduction
All AI applications rely on large datasets to create algorithmic models, train them, and run them over huge amounts of collected information, extracting inferences, correlations, and new information for decision-making processes or other operations that, to some extent, replicate human cognitive abilities.
These results can be achieved using a variety of different mathematical and computer-based solutions, which are included under the umbrella term of AI.Footnote 1 Although they differ in their technicalities, they are all data-intensive systems, and it is this factor that seems most characteristic, rather than their human-like results.
We already have calculators, computers and many other devices that perform typical human tasks, in some cases reproducing our way of thinking or acting, as demonstrated by the spread of machine automation over the decades. The revolution is not so much the ‘intelligent’ machine, which we had already (e.g. expert systems), but the huge amount of information these machines can now use to achieve their results.Footnote 2 No human being is able to process such an amount of information in the same way or so quickly, or to reach the same conclusions (e.g. disease detection through diagnostic imaging) with the same accuracy (e.g. image detection and recognition) as AI.
These data-intensive AI systems thus undermine a core component of the individual’s ‘sovereignty’ over information:Footnote 3 the human ability to control, manage and use information in a clear, understandable and ex post verifiable way.
This is the most challenging aspect of these applications, often summed up in the metaphor of the black box.Footnote 4 Neither the large amounts of data – we have always had large datasetsFootnote 5 – nor data automation for human-like behaviour are the most significant new developments. It is the intensive nature of the processing, the size of the datasets, and the knowledge-extraction power and complexity of the process that are truly different.
If data are at the core of these systems, to address the challenges they pose and draft some initial guidelines for their regulation, we have to turn to the field of law that most specifically deals with data and control over information, namely data protection.
Of course, some AI applications do not concern personal data, but the provisions set forth in much data protection law on data quality, data security and data management in general go beyond personal data processing and can be extended to all types of information. Moreover, the AI applications that raise the biggest concerns are those that answer societal needs (e.g. selective access to welfare or managing smart cities), which are largely based on the processing of personal data.
This correlation with data protection legislation can also be found in the ongoing debate on the regulation of AI where, both in the literature and the policy documents,Footnote 6 fair use of data,Footnote 7 right to explanation,Footnote 8 and transparent data processingFootnote 9 are put forward as barriers to potential misuse of AI.
Here we need to ask whether the existing data protection legislation with its long and successful historyFootnote 10 can also provide an effective framework for these data-intensive AI systems and mitigate their possible adverse consequences.
1.2 Rise and Fall of Individual Sovereignty Over Data Use
When in 1983 the German Constitutional Court recognised the right to self-determination with regard to data processing,Footnote 11 the judges adopted an approach that had its roots in an earlier theoretical vision outlined in the 1960s. This was the idea of individual control as a key element in respect for human personality.
This idea was framed in different ways depending on the cultural contextFootnote 12 and legal framework.Footnote 13 It also extended beyond the realm of data protection as it could relate to general personality rights however they are qualified in different legal contexts.Footnote 14 Regardless of the underpinning cultural values of data protection, the idea of an individual’s power to counter potential data misuse is in line with the European tradition of personality rights.
As with personal names, image, and privacy, for personal data too, the theoretical legal framework aims to give individuals a certain degree of sovereignty regarding the perceivable manifestation of their physical, moral and relational identity. The forms and degree in which this sovereignty is recognised will differ over time and may follow different patterns.Footnote 15
Individual sovereignty contains two components: the inside/outside boundary and the need to protect these boundaries. In personality rights and data protection, these boundaries concern the interaction between the individual and society (control) and the need for protection concerns the potential misuse of individual attributes outside the individual sphere (risk). While this does not rule out the coexistence of a collective dimension, the structure of individual rights is based on the complementary notions of control and risk.Footnote 16
This has been evident since the earliest generations of data protection regulation, which were based on the idea of control over informationFootnote 17 as a response to the risk of social control relating to the migration from dusty paper archives to computer memories.Footnote 18 Their purpose was not to spread and democratise power over information, but to increase the level of transparency about data processing and guarantee the right of access to information, providing citizens with a sort of counter-control over the collected data.Footnote 19
In these first data protection laws we can see the context-dependent nature of this idea of control, where the prevalence of data processing in public hands and the complexity of data processing for ordinary people led regulators to focus on notification, licencing,Footnote 20 right to access and the role of independent authorities. There was no space for individual consent in this socio-technical context.
The current idea of control as mainly centred on individual consent, already common in the context of personality rights, emerges in data protection as the result of the advent of personal computers and the economic exploitation of personal information, no longer merely functional data but a core element of profiling and competitive commercial strategies.Footnote 21
These changes in the technological and business frameworks generated new societal demands on legislators, as citizens wished to negotiate their personal data and gain something in return.
Although the later generations of European data protection law placed personal information in the context of fundamental rights,Footnote 22 the main goal of these regulations was to pursue economic interests relating to the free flow of personal data. This is also affirmed by Directive 95/46/EC,Footnote 23 which represented both the general framework and the synthesis of this second wave of data protection laws.Footnote 24 Nevertheless, the roots of data protection still remained in the context of personality rights making the European approach less market-orientedFootnote 25 than other legal systems. The Directive also recognised the fundamental role of public authorities in protecting data subjects against unwanted or unfair exploitation of their personal information for marketing purposes.
Both the theoretical model of fundamental rights, based on self-determination, and the rising data-driven economy highlighted the importance of user consent in consumer data processing.Footnote 26 Consent was not only an expression of choice with regard to the use of personality rights by third parties, but became a means of negotiating the economic value of personal information.Footnote 27
With the advent of the digital society,Footnote 28 data could no longer be exploited for business purposes without any involvement of the data subject. Data subjects had to become part of the negotiation, since data was no longer used mainly by government agencies for public purposes, but also by private companies generating monetary revenues from it.Footnote 29
Effective self-determination in data processing, both in terms of protection and economic exploitation of personality rights, could not be achieved without adequate awareness about data use.Footnote 30 The notice and consent modelFootnote 31 was therefore a new layer added to the existing paradigm based on transparency and access in data processing.
In the 1980s and 1990s data analysis increased in quality, but its level of complexity remained limited. Consumers understood the general correlation between data collection and the purposes of data processing (e.g. airline miles and points earned towards free flights, or hotel nights and points), and informed consent and self-determination were largely considered synonyms.
This changed with the advent of data-intensive systems based on Big Data analytics and the new wave of AI applications which make data processing more complicated and often obscure. In addition, today’s data-intensive techniques and applications have multiplied in a new economic and technological world which raises questions about the adequacy of the legal framework – established at the end of the last millennium and having its roots in the 1970s – to safeguard individuals’ rights in the field of information technology.
The current social environment is characterised by a pervasive presence of digital technologies and an increasing concentration of information in the hands of just a few entities, both public and private. The main reason for this concentration is the central role played by specific subjects in the generation of data flows. Governments and big private companies (e.g. large retailers, telecommunication companies, etc.) collect huge amounts of data in the course of their daily activities. This mass of information represents a strategic and economically significant asset, since these large datasets enable these entities to act as gatekeepers to the information that can be extracted from them. They can choose to restrict access to the data to specific subjects or to circumscribed parts of the information.
Governments and big private companies are not alone in having this power: information intermediaries (e.g. search engines,Footnote 32 Internet providers, data brokers,Footnote 33 marketing companies), although they do not themselves generate information, play a key role in circulating it.Footnote 34
Even where the information is accessible to the public, in both raw and processed form,Footnote 35 the concurrent effect of all these different sources only apparently diminishes the concentration of power. Access to information is not equivalent to knowledge. A large amount of data creates knowledge only when the holders have the appropriate tools to select relevant information, reorganise it and place it in a systematic context, as well as people with the skills to design the research and interpret the results of analytics.Footnote 36
Without this, data only produces confusion and ultimately results in less knowledge, as information is subject to incomplete or biased interpretation. The mere availability of data is not sufficient in AI;Footnote 37 adequate humanFootnote 38 and computing resources are also needed to handle it.
Control over information therefore regards not only limited-access data, but can also concern open data,Footnote 39 over which the information intermediaries create added value with their analytical tools.
Given that only a few entities are able to invest heavily in equipment and research, the above dynamics sharpen the concentration of power, which has increased with the latest wave of AI.Footnote 40
In many respects, this new environment resembles the origins of data processing, the mainframe era, when technologies were held by a few entities and data processing was too complex to be understood by data subjects. Might this suggest that the future will see a sort of distributed AI, as happened with computers in the mid-1970s?Footnote 41
The position of the dominant players in AI and data-intensive systems is not only based on expensive hardware and software, which may get cheaper in the future. Nor does it depend on the growing number of staff with specific skills and knowledge, capable of interpreting the results provided by AI applications.
The fundamental basis of their power is represented by the huge datasets they possess. These data silos, considered the goldmine of the 21st century, are not freely accessible, but represent the main or collateral result of their owners’ business, creating, collecting, or managing information. Access to these databases is therefore not only protected by law, but is also strictly related to the data holders’ peculiar market positions and the presence of entry barriers.Footnote 42
This makes it hard to imagine the same process of ‘democratisation’ as occurred with computer equipment in the 1980s repeating itself today.
Another aspect that characterises and distinguishes this new concentration of control over information is the nature of the purposes of data use: data processing is no longer focused on single users (profiling), but has increased in scale to cover attitudes and behaviours of large groupsFootnote 43 and communities, even entire countries.Footnote 44
The consequence of this large-scale approach is the return of fears about social surveillance and the lack of control over important decision-making processes, which characterised the mainframe era.
At the same time, this new potentially extensive and pervasive social surveillance differs from the past, since today’s surveillance is no longer largely performed by the intelligence apparatus, which independently collects a huge amount of information through pervasive monitoring systems. It is the result of the interplay between private and public sectors,Footnote 45 based on a collaborative model made possible by mandatory disclosure orders, issued by courts or administrative bodies, and extended to an undefined pool of voluntary or proactive collaborations by big companies.Footnote 46
In this way, governments may obtain information with the indirect “co-operation” of consumers who quite probably would not have given the same information to public entities if requested. Service providers, for example, collect personal data on the basis of private agreements (privacy policies) with the consent of the user and for specific purposes,Footnote 47 but governments exploit this practice by using mandatory orders to obtain the disclosure of this information.Footnote 48 This dual mechanism hides from citizens the risk and extent of social control that can be achieved by monitoring social media or other services using data-intensive technologies.Footnote 49
In addition, the current role played by private online platforms and the environment they create, which also include traditional state activities,Footnote 50 raise further issues concerning the possibility of them having an influence on individual and collective behaviour.Footnote 51
In this scenario, the legal framework established in the 1990s to regulate data useFootnote 52 has entered a crisis, since the new technological and economic contexts (i.e. market concentration, social and technological lock-ins) have undermined its fundamental pillars,Footnote 53 which revolve around the purpose specification principle, the prior limitation of possible uses,Footnote 54 and an idea of individual self-determination mainly based on the notice and consent model.
The purpose specification and use limitation principles have their roots in the first generation of data protection regulation, introduced to avoid extensive and indiscriminate data collection that might entail risks in terms of social surveillance and control.
In the 1980s and 1990s, with the advent of a new generation of data protection regulation, these principles not only put a limit on data processing, but also became key elements of the notice and consent model. They define the uses data controllers may make of personal data, information that significantly affects users’ choices. Nevertheless, the advent of AI applications makes it difficult to provide detailed information about the purposes of data processing and the expected outputs.
Since data-intensive systems based on AI are designed to extract hidden or unpredictable inferences and correlations from datasets, the description of these purposes is becoming more and more generic and approximate. This is a consequence of the “transformative”Footnote 55 use of data made by these systems, which often makes it impossible to explain all the possible uses of data at the time of its initial collection.Footnote 56
These critical aspects concerning the purpose specification principle have a negative impact on the effectiveness of the idea of informational self-determination as framed by the notion of informed consent.
First, the difficulty of defining the expected results of data use leads to the introduction of vague, generic statements about the purposes of data processing. Second, even where notices are long and detailed, the complexity of the AI-based environment makes it impossible for users to really understand them and make informed choices.Footnote 57
Moreover, the situation is made worse by economic, social, and technological constraints, which completely undermine the idea of self-determination with regard to personal information which represented the core principle of the generation of data protection regulation passed in the 1980s and 1990s.Footnote 58
Finally, as mentioned before, we have seen an increasing concentration of informational assets, partly due to the multinational or global nature of a few big players in the new economy, but also due to mergers and acquisitions that created large online and offline companies. In many cases, especially in IT-based services, these large-scale trends dramatically limit the number of companies that provide certain services and which consequently have hundreds of millions of users. The size of these dominant players produces social and technological lock-in effects that accentuate data concentration and represent further direct and indirect limitations to the consumer’s self-determination and choice.Footnote 59
1.3 Reconsidering Self-determination: Towards a Safe Environment
In the above scenario, characterised by data-intensive applications and the concentration of control over information, the decision to stick with a model based largely on an idea of informational self-determination centred on informed consent raises critical concerns for the effective protection of individuals and their rights.Footnote 60
This leads us to reconsider the role of user self-determination in situations where individuals are unable to understand data processing and its purposes fullyFootnote 61 or are not in a position to decide.Footnote 62 In these cases, the focus cannot be primarily on the user and self-determination but must shift to the environment. A broader view is needed, with human-centred solutions and applications where the burden of assessing the potential benefits and risks for individual rights and freedoms does not fall mainly on the shoulders of the impacted individuals or groups.
Without limiting the freedom of individuals not to be subject to AI systems – except in cases of prevailing competing interests (e.g. crime detection systems) – these systems should provide a safe environment in terms of potential impacts on fundamental rights and freedoms. Just as customers do not have to check the safety of the cars they buy, the end users of AI systems should not have to check whether their rights and freedoms are safeguarded.
AI providers and the users of AI systems (e.g., municipalities in smart cities), rather than end users (e.g., citizens), are in the best position to assess these risks to individual rights and freedoms and to develop or deploy AI systems with a rights-oriented design approach, under the supervision of competent and independent authorities. Furthermore, they are also in the best position to consider all the different interests of the various stakeholders with regard to extensive data collection and data mining.Footnote 63
Against this background and given the data-intensive nature of the systems involved, a first line of attack might be to consider data protection law as the reference framework for AI regulation, broadening its scope. This has been done in the literature with regard to the GDPR, focusing on open clauses such as fairness of data processingFootnote 64 or promoting the data protection impact assessment (DPIA) as a general-purpose methodology.Footnote 65
However, looking at the big picture and not just specific elements, existing data protection regulations are still focused on the traditional pillars of the so-called fourth generation of data protection law:Footnote 66 the purpose specification principle, the use limitation principle and the notice and consent model (i.e. an informed, freely given and specific consent).Footnote 67
These components of data protection regulation struggle with today’s challenges, where the transformative use of dataFootnote 68 often makes it impossible to know and explain all the uses of information at the time of its initial collection, or provide detailed information about AI data processing and its internal logic.Footnote 69
The asymmetric distribution of control over information and market concentrationFootnote 70 highlighted in the previous section,Footnote 71 as well as socialFootnote 72 and technological lock-ins,Footnote 73 further undermines the idea of information self-determination in AI based mainly on the user’s conscious decision on the potential benefits and risks of data processing.Footnote 74
In addition, looking at the potential impact of AI, these data-intensive systems may affect a variety of rights and freedomsFootnote 75 that is much broader than the sphere covered by data protection. This must necessarily be reflected in the assessment methodologies which should go beyond the limited perspective adopted in today’s data protection impact assessment models, which are mainly centred on the processing, task allocation, data quality, and data security.Footnote 76
Although the EU legislator recognises data processing risks such as discrimination and “any other significant economic or social disadvantage”,Footnote 77 and recommends a broader assessment including analysis of the societal and ethical consequences,Footnote 78 Article 35 of the GDPR and the supervisory authorities’ assessment models do not adequately consider the potentially impacted rights, their diversity and complexity, or the ethical and social issues involved.Footnote 79
Finally, the impact on society of several AI-based systems raises ethical and social issues, which have been only touched on in defining the purposes of DPIA and often poorly implemented in practice.Footnote 80
For these reasons, a holistic approach to the problems posed by AI must look beyond the traditional data protection emphasis on transparency, information, and self-determination. In the presence of complicated and often obscure AI applications, focusing on their design is key to ensuring effective safeguarding of individual rights. Such safeguards cannot simply be left to the interaction between AI manufacturers/adopters and potentially impacted individuals, given the asymmetry and bias inherent to this interaction.
Given the active and crucial role in creating a safe environment – from a legal, social and ethical perspective – of those who design, develop, deploy and adopt AI systems, it is crucial to provide them with adequate tools to consider and properly address the potential risks of AI applications for individuals and society.
1.4 A Paradigm Shift: The Focus on Risk Assessment
Risk assessment models today play an increasing role in many technology fields, including data processing,Footnote 81 as a consequence of the transformation of modern society into a risk societyFootnote 82 – or at least a society in which many activities entail exposure to risks and one that is characterised by the emergence of new risks. This has led legislators to adopt a risk-based approach in various areas of the legal governance of hazardous activities.Footnote 83
Different assessment models are possible (technology assessment, risk/benefit assessment, rights-based assessment) in different domains (e.g., legal assessment, social assessment, ethical assessment), but the first question we need to ask when defining an assessment model is whether the model is sector-specific or general. This is an important question with respect to AI too, since AI solutions are not circumscribed by a specific domain or technology.
The adoption of a technology-specific approach – for example, an IoT impact assessment, a Big Data impact assessment or a smart city impact assessment – seems misguided.Footnote 84 From a rights-oriented perspective, all these technologies and technology environments are relevant insofar as they interact with individuals and society, and have a potential impact on decision-making processes.
Regardless of the different software and hardware technologies used, the focus of a human-centred approach is necessarily on the rights and values to be safeguarded. The model proposed here is thus not a technological assessment,Footnote 85 but a rights-based and values-oriented assessment.
In the context of data-driven applications, an assessment model focused on a specific technology appears inadequate or only partially effective.Footnote 86 On the other hand, given the various application domains (healthcare, crime prevention, etc.), different sets of rights, freedoms and values are at stake. A sector-specific approach must therefore focus on the rights and values in question rather than the technology.
Sectoral models concentrate their attention, not on technologies, but on the context and the values that assume relevance in a given context.Footnote 87 This does not mean that the nature of the technology has no importance in the assessment process as a whole, but that it mainly regards the type and extent of the impact.
Adopting a value-oriented approach, the assessment should focus on the societal impact which includes the potential negative outcomes on a variety of fundamental rights and principles, no longer restricted to simple privacy-related risks,Footnote 88 and encompassing the ethical and social consequences of data processing.Footnote 89
A general AI impact assessment, centred on human rights,Footnote 90 ethical and societal issues, can address the call for a broader protection of individuals in the AI context and better deal with the rising demand for ethically and socially oriented AI from citizens and companies.Footnote 91
The inclusion of ethical and societal issues is consistent with the studies in the realm of collective data protectionFootnote 92 that point out the importance of these non-legal dimensions in the context of data-intensive applications.Footnote 93 Evidence in this regard comes from predictive policing software, credit scoring models and many other algorithmic decision-support systems that increasingly target groups and society at large rather than single persons, thus highlighting the group and societal scale of the potential adverse impacts.
Although a variety of bottom-up initiatives, corporate guidance and ongoing public investigations partially fill the current absence of a holistic approach to risk in AI, the main limitation of these initiatives lies in the variety of values, approaches and models adopted.Footnote 94 Similarly, the ongoing debate on AI regulation has not yet furnished a clear assessment model.Footnote 95
Against this background, the following sections sketch out a uniform model – whose components are discussed in greater detail in Chaps. 2 and 3 – which provides a common ground for an AI application assessment and, at the same time, offers sufficient flexibility to give voice to differing viewpoints.
1.5 HRESIA: A Multi-layered Process
The main components of the Human Rights, Ethical, and Social Impact Assessment (HRESIA) are the analysis of relevant human rights, the definition of relevant ethical and social values and the targeted application of these frameworks to given AI cases. The HRESIA therefore combines the universal approach of human rightsFootnote 96 with the local dimension of societal values.
The first layer of the model is based on the common values found in human rights and related process principles,Footnote 97 whose relevance has also been recognised by Data Protection Authority (DPA) jurisprudence and the courts.Footnote 98 The second layer concerns the social and ethical values which play an important role in addressing non-legal issues associated with the adoption of certain AI solutions and their acceptability, and the balance between the different human rights and freedoms, in different contexts and periods.Footnote 99
The proposed model therefore combines the human rights assessment with attention to the societal and ethical consequences,Footnote 100 but without becoming a broader social impact assessment, remaining focused on human rights. In this sense, ethical and social values are viewed through the lens of human rights and serve to go beyond the limitations of legal theory or practical implementation in effectively addressing the most urgent issues concerning the societal impacts of AI.
Moreover, ethical and social values are key to interpreting human rights in the regional context, in many cases representing the unspoken aspect of the legal reasoning behind the decisions of supervisory authorities or courts when ruling on large-scale impacting use of data.Footnote 101
One option for embodying this theoretical framework in an assessment tool focused on concrete cases is to follow the models already adopted in the field of data processing.Footnote 102 This is envisaged in recent proposals concerning AI,Footnote 103 which follow a questionnaire-based approach including, in some cases, open questions concerning human rights and social issues, though with a limited level of granularity.
However, the HRESIA model follows a different approach, using different tools for human rights than for ethical and social issues: the first component relies on questionnaires and risk assessment tools (Chap. 2), while the second is built on the use of experts to address the societal challenges associated with the development and implementation of AI solutions (Chap. 3).
Questionnaires and checklists alone are not sufficient to cover the human rights, ethical and societal components of the impact assessment. They can be useful in the HRIA (Human Rights Impact Assessment) planning and scoping phase, as well as in the collection of relevant data, but this is only one part of the assessment procedure, which includes evaluation models, data analysis, and expert evaluation.Footnote 104
In the case of ethical and social issues, standardised questionnaires and checklists cannot grasp the specificities of the case, whereas experts interacting with relevant stakeholders can play a crucial role in understanding and exploring important questions. Questionnaires and checklists are just two of the possible tools to be used in fieldwork, along with focus groups, interviews, etc.Footnote 105
From a methodological standpoint, an important role is played by participation,Footnote 106 which makes it possible to gain a better understanding of the different competing interests and societal values.Footnote 107 Both in carrying out the assessment and in the mitigation phase – where the results of the HRESIA may suggest the engagement of specific categories of individuals – participation can give voice to the different groups of persons potentially affected by the use of data-intensive systems and to different stakeholdersFootnote 108 (e.g. NGOs, public bodies),Footnote 109 facilitating a human-centred approach to AI design.
Participation is therefore a development goal for the assessment,Footnote 110 since it reduces the risk of under-representing certain groups and may also flag up critical issues that have been underestimated or ignored.Footnote 111 However, as pointed out in risk theory,Footnote 112 participation should not become a way for decision makers to avoid their responsibilities as leaders of the entire process.Footnote 113 Decision makers, in the choice and use of AI systems, must remain committed to achieving the best results in terms of minimising the potential negative impacts of data use on individuals and society.
Finally, given the social issues that underpin the HRESIA, transparency is an essential methodological requirement of this model. Transparency is crucial for effective participation (Chap. 3) – as demonstrated in fields where impact assessments concern the societal consequences of technology (e.g. environmental impact assessments) – and is also crucial in providing potentially affected people with information that gives them a better understanding of the risks of AI and reduces the limitations on their self-determination.
Along the lines of risk management models, the HRESIA assessment process adopts a by-design approach from the earliest stages and is characterised by a circular approach that follows the product/service throughout its lifecycle, which is also in line with the circular product development models that focus on flexibility and interaction with users to address their needs.Footnote 114
1.6 The Role of Experts
The combination of these different layers in the model proposed here is intended to provide a self-assessment tool enabling AI system developers, deployers, and users to identify key values guiding the design and implementation of AI products and services. However, general background values and their contextual application may not be enough to address the societal changes when designing data-intensive systems. Although balanced with respect to the context, the definition of such rights and values may remain theoretical and need to be further tailored to the specific application.
To achieve a balance in specific cases, individuals with the right skills are needed to apply this set of rights and values in the given situation. The difficulty of bridging the gap between the theory of rights and values and their concrete application, given the nature of data use and the complexity of the associated risks, means that experts can play an important role in applying general principles and guidelines to a specific case (see Chap. 3).
Experts are therefore a key component of model implementation as they assist AI developers and users in this contextualisation and in applying the HRESIA benchmark values to the given case, balancing interests that may be in conflict, assessing risks and mitigating them.
The need for an expert view in data science has already been perceived by AI companies. The increasing and granular availability of data about individuals, gathered from various devices, sensors, and online services, enables private companies to collect huge amounts of data from which they can extract further information about individuals and groups. Private companies are therefore now more easily able to conduct large-scale social investigations, which can be classed as research activities of a kind traditionally carried out by research bodies. This raises new issues, since private firms often do not have the same ethicalFootnote 115 and scientific background as researchers in academia or research centres.Footnote 116
To address this lack of expertise, the adoption of ethical boards has been suggested, which may act at a national level, providing general guidelines, or at a company level, supporting data controllers on specific data applications.Footnote 117 Several companies have already set up ethical boards, appointed ethical advisors or adopted ethical guidelines.Footnote 118
However, these boards have a limited focus on ethical issues and do not act within a broader framework of rights and values. Such shortcomings highlight the self-regulatory nature of these solutions, which lack a strong general framework that could provide a common baseline for a holistic approach to human-centred AI.
On the other hand, committees of experts within the HRESIA framework could build on the human rights framework outlined above, representing a sound and common set of values to guide expert decisions and complemented by the ethical and social values taken into account by the HRESIA.
These aspects will clearly have an influence on the selection of the experts involved. Legal expertise, an ethical and sociological background, and domain-specific knowledge of data applications are required. Moreover, the background and number of experts will also depend on the complexity of AI use.Footnote 119
The main task of the experts is to consider the specific AI use and place it in the local context, providing a tailored and more granular application of the legal and societal values underpinning the HRESIA model. In this process, the experts may decide that this contextual application of general principles and values requires the engagement of the groups of individuals potentially affected by AIFootnote 120 or institutional stakeholders. In this sense, the HRESIA is not a mere desk analysis, but takes a participatory approach – as described earlierFootnote 121 – which may be enhanced by the work of the experts involved in the HRESIA implementation.
To guarantee the transparency and independence of these experts and their deliberations, specific procedures regulating their activity, including stakeholder engagement, should be adopted. In addition, full documentation of the decisional process should be recorded and archived for a specific period of time, depending on the type of data use.
1.7 Assessing the Impact of Data-Intensive AI Applications: HRESIA Versus PIA/DPIA, SIA and EtIA
When comparing the HRESIA model with the impact assessment solutions adopted in the field of data-centred systems, the main reference is the experience gained in data protection.
The focus on the risks arising from data processing has been an essential element of data protection regulation from the outset, though over the years this risk has evolved in a variety of ways.Footnote 122 The original concern about government surveillanceFootnote 123 has been joined by new concerns regarding the economic exploitation of personal information (risk of unfair or unauthorised uses of personal informationFootnote 124) and, more recently, by the increasing number of decision-making processes based on information (risk of discrimination, large scale social surveillance, bias in predictive analysesFootnote 125).
From a theoretical perspective, this focus on the potential adverse effects of data use has not been an explicit element of data protection law. The main purpose of many of its provisions is the safeguarding of specific values, rights and freedoms (e.g. human dignity, non-discrimination, freedom of thought, freedom of expression) against potential prejudices, adopting a procedural approach that leaves these interests in the shadows, encapsulated in the broad and general notion of data protection.
Moreover, compared to other personality rights, such as the right to one's image or name, data protection has a proteiform nature, as data may consist of names, numbers, behavioural information, genetic data or many other types of information. The progressive datafication of our world makes it difficult to find something that is not or cannot be transformed into data. The resulting broad notion of data protection covers different fields and has partially absorbed some elements traditionally protected by other personality rights.Footnote 126
Against this background, the idea of control over information was used to aggregate the various forms of data protection and to find a common core.Footnote 127 The procedural approach is consistent with this idea, as it secures all stages of data processing, from data collection to communication of data to third parties. Nevertheless, control over information describes the nature of the power that the law grants to the data subject, not its theoretical foundations.
In this regard, part of the legal doctrine has emphasised the role of human dignity as the cornerstone of data protection in Europe.Footnote 128 However, the interplay with the non-discrimination principleFootnote 129 and the role of data protection in the public sphere and digital citizenshipFootnote 130 suggest that a broader range of values underpin data protection.
Although, over the years, data protection regulationsFootnote 131 and practicesFootnote 132 have adopted a more explicit risk-based approach to address the varying challenges of data use, they still focus on the procedural aspects. Data management procedures therefore represent a form of risk management based on the regulation of the different stages of data processing (collection, analysis and communication) and the definition of the powers and tasks of the various actors involved in this process.
This procedural approach and the focus of risk assessment on data management have led data protection authorities to propose assessment models (Privacy Impact Assessment, PIA) primarily centred on data quality and data security, leaving aside the nature of safeguarded interests. Instead, these interests are taken into account by DPAs and courts in their decisions, but – since data protection laws provide limited explicit references to the safeguarded values, rights and freedoms – the analysis of the relevant interest is often curtailed or not adequately elaborated.Footnote 133
Data protection authorities and courts prefer arguments grounded on the set of criteria provided by data protection regulations.Footnote 134 The lawfulness and fairness of processing, transparency, purpose limitation, data minimisation, accuracy, storage limitation, data integrity and confidentiality are general principles frequently used by data protection authorities in their reasoning.Footnote 135 However, these principles are only an indirect expression of the safeguarded interests. Most of them are general clauses that may be interpreted more or less broadly and require an implicit consideration of the interests underpinning data use.
Moreover, the indefinite nature of these clauses has frequently led courts and DPAs to adopt the criterion of proportionality,Footnote 136 which amounts to a synthesis of the different competing interests and rights. In fact, this balancing of interests, and the reasoning that distinguishes precisely between them, is often implicit in the notion of proportionality and either not discussed in the decisions taken by the DPAs or discussed only in an axiomatic manner.Footnote 137
Against this scenario, it is difficult for data controllers to understand and acknowledge the set of legal and social values that they should take into account in developing their data-intensive devices and services, since these values and their mutual interaction remain unclear and undeclared. Nor is this difficulty solved by the use of PIAs, since these assessment models merely point out the need to consider aspects other than data quality and data security, without specifying them or providing effective tools to identify and take into account broader social values.
Equally, the recent requirements of the GDPR – according to the models proposed by the DPAs – fail to offer a more satisfactory answer. Despite specific references in the GDPR to the safeguarding of rights and freedoms in general, as well as to societal issues,Footnote 138 the new assessment models pay no greater attention to the societal consequences than the existing PIAs.Footnote 139
The HRESIA fills this gap, providing an assessment model focused on the rights and freedoms that may be affected by data use,Footnote 140 offering a more appropriate contextualisation of the various rights and freedoms that are relevant to data-intensive systems. These rights and freedoms are no longer limited to data protection and should therefore be considered separately rather than absorbed into a broad notion of data protection.
Moreover, the HRESIA makes explicit the relevant social and ethical values considered in the evaluation of the system, while data protection laws, as well as proposed AI regulations, use general principles (e.g. fairness or proportionality) and general clauses (e.g. necessity, legitimacyFootnote 141) to introduce non-legal social values into the legal framework. Legal scholars have also highlighted how the application of human rights is necessarily affected by social and political influences that are not explicitly formalised in court decisions.Footnote 142
From this perspective, a HRESIA may be used to unveil the existing interplay between the legal and the societal dimensions,Footnote 143 making it explicit. It is important to reveal this cross-fertilization between law and society, without leaving it concealed between the lines of the decisions of the courts, DPAs or other bodies.
Finally, a model that considers the social and ethical dimensions also helps to democratise assessment procedures, removing them from the exclusive hands of the courts, mediated by legal formalities.
This change in the assessment analysis can have a direct positive impact on business practices. Although courts, DPAs and legal scholars are aware of the influence of societal issues on their reasoning, this is often not explicit in their decisions. Product developers are therefore unable to grasp the real sense of the existing provisions and their implementation. Stressing the societal values that should be taken into account in human rights assessment helps developers to carry out self-assessments of the potential and complex consequences of their product and services, from the early stages of product design.
Some may argue that the proposed approach introduces a paternalistic view of data processing. In this sense, a HRESIA model necessarily encourages system designers, developers and users to rule out certain processing operations due to their ethical or social implications, even though some end users may take a different view and consider them in line with their own values. The model may therefore be seen as a limitation of self-determination, indirectly reducing the range of available data use options.
The main pillar of this argument rests on individual self-determination, but this notion is largely undermined by today’s AI-driven data use.Footnote 144 The lack of conscious understanding in making decisions on data processing, and the frequent lack of effective freedom of choice (due to social, economic and technical lock-ins), argue for a slightly paternalistic approach as a way to offset these limitations on individual self-determination.Footnote 145 Moreover, HRESIA is not a standard but a self-assessment tool. It aims to provide a better awareness of the human rights, ethical and social implications of data use, including a bottom-up participatory approach and a context-based view, which give voice to different viewpoints.
Finally, the publicity surrounding the HRESIA (in line with the HRIA) may help to reinforce individual self-determination, as it makes explicit the implications of a certain data processing operation and fosters end users’ informed choice. Publicity increases not only the data subject’s awareness, but also the data controller’s accountability in line with a human rights-oriented approach.Footnote 146
There are cases in which full disclosure of the assessment results may be limited by the legitimate interests of the data controller, such as confidentiality of information, security, and competition. For example, the Guidelines on Big Data adopted by the Council of Europe in 2017Footnote 147 – following the opinions of legal scholarsFootnote 148 – specify that the results of the assessment proposed in the guidelines “should be made publicly available, without prejudice to secrecy safeguarded by law. In the presence of such secrecy, controllers provide any confidential information in a separate annex to the assessment report. This annex shall not be public but may be accessed by the supervisory authorities”.Footnote 149
Having highlighted the difference between PIA/DPIA and HRESIA, it is worth noting how closely HRESIA stands to the SIA (Social Impact Assessment). They share a similar focus on societal issues and the collective dimension,Footnote 150 an interest in public participation, empowerment of individuals and groups through the assessment process, attention to non-discrimination and equal participation in the assessment, accountability procedures and circular architecture. Important similarities also exist with the EtIA (Ethical Impact Assessment) modelsFootnote 151 and the focus on the ethical dimension.
However, despite the similarities, there are significant differences that set the HRESIA apart from both the PIA/DPIA and the SIA and EtIA models. The main differences concern the rationale of these models, the extent of the assessment and the way the different interests are balanced in the assessment. The HRESIA aims to provide a universal tool that, at the same time, also takes into account the local dimension of the safeguarded interests. In this sense, it is based on a common architecture grounded on international instruments with normative force (charters of fundamental rights). The core of the architecture is represented by human rights, which also play a role in SIA models but are not pivotal, as the SIA takes a wider approach.Footnote 152
In fact, the scope of the SIA model encompasses a wide range of issues,Footnote 153 broad theoretical categories and focuses on the specific context investigated.Footnote 154 The solutions proposed by the SIA are therefore heterogeneous and vary in different contexts,Footnote 155 making it difficult to place them within a single framework, which – on the contrary – is a key requirement in the context of the global policies on AI.
By contrast, a model grounded on human rightsFootnote 156 is more closely defined and universally applicable. Moreover, the SIA is designed for large-scale social phenomena, such as policy solutions,Footnote 157 while the HRESIA focuses on specific data-intensive AI applications.
Finally, the HRESIA is largely a rights-based assessment, in line with the approach adopted in data protection (PIA, DPIA), while both the SIA and the EtIA (Ethical Impact Assessment) are risks/benefits models.
As regards the comparison between the HRESIA and the EtIA,Footnote 158 the same considerations made with regard to the SIA apply.Footnote 159 In the forms proposed in the context of data use, the EtIA model has a clearer link with the ethical principles already recognised in law.Footnote 160 However, a purely ethical assessment runs the risk of overlap between ethical guidance and legal requirements.
1.8 The HRESIA and Collective Dimension of Data Use
Shifting the focus from the traditional sphere of data quality and security to fundamental rights and freedoms, the HRESIA can be of help in dealing with the emerging issues concerning the collective dimension of data processing.Footnote 161
Data-intensive applications and their use in decision-making processes impact on a variety of fundamental rights and freedoms. Not only does the risk of discrimination represent one of the biggest challenges of these applications, but other rights and freedoms also assume relevance, such as the right to the integrity of the person, to education, to equality before the law, and freedom of movement, of thought, of expression, of assembly and freedom in the workplace.Footnote 162
Against this scenario, the final question that the proposed model must address regarding its interplay with data protection concerns the compatibility of the collective dimension of data protection with the way human rights are framed by legal scholars. To answer this question, it is necessary to highlight how the notion of collective data protection has tried to go beyond the individual dimension of data protection and its focus on data quality and security, suggesting a broader range of safeguarded interests and considering individuals as a group.
An impact assessment focussing on the broader category of human rights, which also takes into account the ethical and societal issues related to data use, can provide an answer to this need. This broader perspective and the varied range of human rights make it possible to consider the impacts of data use more fully, not limited to the protection of personal information. Moreover, several principles, rights, and freedoms in the charters of human rights directly or indirectly address group or collective issues.
However, in the context of human rightsFootnote 163 as well as data protection, legal doctrine and the regulatory framework focus primarily on the individual dimension. Furthermore, in some cases, human rights theory provides little detail on the rights and freedoms threatened by the challenges of innovative digital technology.Footnote 164
In this regard, for example, the approach to classification adopted by modern algorithms does not merely focus on individuals and on the categories traditionally used for unfair or prejudicial treatment of different groups of people.Footnote 165 Algorithms create groups or clusters of people with common characteristics other than the traditionally protected grounds (e.g. customer habits, lifestyle, online and offline behaviour, network of personal relationships etc.). For this reason, the wide application of predictive technologies based on these new categories and their use in decision-making processes challenges the way discrimination has usually been understood.Footnote 166
Additionally, the nature of the groups created by data-intensive applications poses challenging issues from the procedural viewpoint, concerning the potential remedies to the need for collective representation in the context of algorithmically created groups.Footnote 167 People belonging to groups that are the traditional targets of discriminatory practices are aware of their membership and know, or may know, the other members of the group. By contrast, in the groups generated by algorithms, people do not know the identity of the other members, have no relationship with them, are in many cases unaware of the consequences of belonging to the group, and have only a limited perception of their collective issues.
Hard law remedies in this field may not be easy to achieve in the short run and the existing or potential procedural rules often vary from one legal context to another.Footnote 168 In this scenario, an assessment tool may represent a valid alternative to address these challenges. For these reasons, a model based on a participatory approach and in which human rights are seen through the lens of ethical and social values can provide broader safeguards both in terms of the interests taken into account and the categories of individuals engaged in the process.
Finally, providing a framework for a collective and societal impact assessment of data-intensive applications is also in line with the ongoing debate on Responsible Research InnovationFootnote 169 and the demands of the data industry and product developers for practical self-assessment tools to help them address the social issues of data use. Such tools should be more flexible, open to new emerging values, easily reshaped and applicable in different legal and cultural contexts. At the same time, it should be pointed out that the HRESIA model differs from the Responsible Research Innovation assessment, since the latter takes into account a variety of societal issues that do not necessarily concern fundamental rights and freedomsFootnote 170 (e.g. interoperability, openness).Footnote 171
1.9 Advantages of the Proposed Approach
The positive features of the proposed model for assessing the impact of data use can be briefly summarised as follows:
-
The central role of human rights in HRESIA provides a universal set of values, making it suited to various legal and social contexts.
-
The HRESIA is a principle-based model, which makes it better suited to dealing with the rapid pace of technological development, not easily addressed by detailed sets of provisions.
-
The proposed model follows in the footsteps of the data protection assessments, as a rights-based assessment in line with the PIA and DPIA approaches. However, it is broader in scope in that individual rights are properly and fully considered, consistent with their separate theoretical elaboration.
-
The HRESIA emphasises the ethical and social dimensions, both as lenses giving a better understanding of the human rights implications in a given context and as spheres to be considered independently when deciding to implement data-intensive AI-based systems affecting individuals and society.
-
By stressing ethical and social values, the HRESIA helps to make explicit the non-legal values that inform the courts and DPAs in their reasoning when they apply general data protection principles, interpret general clauses or balance conflicting interests in the context of data-intensive systems.
-
In considering ethical and social issues, this model makes it possible to give flexibility to the legal framework in dealing with AI applications. A human rights assessment that operates through the lens of ethical and social values can therefore better address the challenges of the developing digital society.
-
Finally, as an assessment tool, the HRESIA fosters the adoption of a preventive approach to product/service development from the earliest stages, favouring safeguards to rights and values, and a responsible approach to technology development.
1.10 Summary
The increasing use of AI in decision-making processes highlights the importance of examining the potential impact of AI data-intensive systems on individuals and society at large.
The consequences of data processing are no longer restricted to the well-known privacy and data protection issues but encompass prejudices against groups of individuals and a broader array of fundamental rights. Moreover, the tension between the extensive use of data-intensive systems, on the one hand, and the growing demand for ethically and socially responsible data use on the other, reveals the lack of a regulatory framework that can fully address the societal issues raised by AI technologies.
Against this background, neither traditional data protection impact assessment models (PIA and DPIA) nor the broader social or ethical impact assessment procedures (SIA and EtIA) appear to provide an adequate answer to the challenges of our algorithmic society.
While the former have a narrow focus – centred on data quality and data security – the latter cover a wide range of issues, employing broad theoretical categories and providing a variety of different solutions. A human rights-centred assessment may therefore offer a better answer to the demand for a more comprehensive assessment, including not only data protection, but also the effects of data use on other fundamental rights and freedoms (such as freedom of movement, freedom of expression, of assembly and freedom in the workplace) and related principles (such as non-discrimination).
Moreover, a human rights assessment is grounded on the charters of fundamental rights, which provide the common baseline for assessing data use in the context of global AI policies.
While the Human Rights Impact Assessment (HRIA) is not a new approach in itselfFootnote 172 and has its roots in environmental impact assessment models and development studies,Footnote 173 HRIA has not yet been systematically applied in the context of AI.Footnote 174
However, given the enormous changes to society brought by technology and datafication, when applied to the field of AI the HRIA must be enriched to consider ethical and societal issues, evolving into a more holistic model such as the proposed Human Rights, Ethical and Social Impact Assessment (HRESIA).
The HRESIA is also more closely aligned with the true intention of the EU legislator to safeguard not only the right to personal data protection, but also the fundamental rights and freedoms of natural persons.
Furthermore, ethical and social values, viewed through the lens of human rights, make it possible to overcome the limitations of the traditional human rights impact assessment and help to interpret human rights in line with the regional context. The HRESIA can in this way contribute to a universal tool that also takes the local dimension of the safeguarded interests into account.
To achieve these goals the HRESIA model combines different components, from self-assessment questionnaires to participatory tools. They help define the general value framework and place it in a local context, providing a tailored and granular application of the underlying legal and social values.
On the basis of this architecture, such an assessment tool can raise awareness among AI manufacturers, developers, and users of the impact of AI-based products/services on individuals and society. At the same time, a participatory and transparent assessment model like the HRESIA also gives individuals an opportunity for more informed choices concerning the use of their data and increases their awareness about the consequences of AI applications.
This assessment may represent an additional burden for the AI industry and adopters. However, even in contexts where it is not required by law,Footnote 175 it could well gain ground in those areas where people pay greater attention to the ethical and social implications of AI (healthcare, services/products for children, etc.) or where socially oriented entities or developers' communities are involved. Moreover, as has happened in other sectors, greater attention to human rights and societal impacts may represent a competitive advantage for companies that deal with responsible consumers and partners.
Finally, the focus of policymakers, industry, and communities on the ethical and responsible use of AI, and the lack of adequate tools to assess the impact of AI on fundamental rights and freedoms, as called for by the proposals under discussion in Europe (fn 176), also make the HRESIA a possible candidate as a mandatory assessment tool.
Notes
- 1.
Several documents have tried to provide a definition of Artificial Intelligence. See inter alia UNESCO 2021; Council of Europe, Committee of Ministers 2020; Council of Europe, Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108) 2019; OECD 2019; The European Commission’s High-level Expert Group on Artificial Intelligence 2018.
- 2.
- 3.
Westin 1970.
- 4.
Pasquale 2015.
- 5.
An example is the Library of Alexandria with half a million scrolls.
- 6.
European Parliamentary Research Service 2020.
- 7.
- 8.
Wachter et al. 2018.
- 9.
- 10.
- 11.
Federal German Constitutional Court (Bundesverfassungsgericht), 15 December 1983, Neue Juristische Wochenschrift, 1984, p. 419; Rouvroy and Poullet 2009.
- 12.
Whitman 2004.
- 13.
Strömholm 1967.
- 14.
- 15.
- 16.
Solove 2008, p. 24.
- 17.
- 18.
- 19.
- 20.
Bygrave 2002, pp. 75–77.
- 21.
Although direct marketing has its roots in mail order services, which were based on personalised letters (e.g. using the name and surname of addressees) and general group profiling (e.g. using census information to group addressees into social and economic classes), the use of computer equipment increased the level of processing of consumer information and generated detailed consumer profiles. See Petrison et al. 1997, pp. 115–119; Solove 2001, pp. 1405–1407.
- 22.
Council of Europe, Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, opened for signature on 28 January 1981 and entered into force on 1 October 1985. http://conventions.coe.int/Treaty/Commun/QueVoulezVous.asp?NT=108&CL=ENG. Accessed 27 February 2014; OECD 1980.
- 23.
Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data [1995] OJ L281/31.
- 24.
- 25.
- 26.
See Charter of Fundamental Rights of the European Union (2010/C 83/02), Article 8 [2010] C83/389. See also Productores de Música de España (Promusicae) v Telefónica de España SAU, C-275/06, para 63–64. http://curia.europa.eu/juris/liste.jsf?language=en&jur=C,T,F&num=C-275/06&td=ALL. Accessed 27 February 2014; Federal German Constitutional Court (Bundesverfassungsgericht), 15 December 1983 (fn 11). Among the legal scholars, see also Schwartz 2013; Tzanou 2013; Solove 2013.
- 27.
But see Acquisti and Grossklags 2005.
- 28.
- 29.
- 30.
The notice describes in detail how the data is processed and the purposes of the processing.
- 31.
See Articles 2(h), 7(a) and 10, Directive 95/46/EC. See also Article 29 Data Protection Working Party 2011, pp. 5–6; Article 29 Data Protection Working Party 2014a. With regard to personal information collected by public entities, Directive 95/46/EC permits data collection without the consent of the data subject in various cases; however, notice to data subjects remains necessary in these cases. See Articles 7, 8 and 10, Directive 95/46/EC. See also Alsenoy et al. 2014; Kuner 2012, p. 5; Brownsword 2009.
- 32.
See also Sparrow et al. 2011.
- 33.
- 34.
Cannataci et al. 2016, pp. 25–29.
- 35.
This is true of open data sets made available by government agencies, information held in public registries, data contained in reports, studies and other communications made by private companies and, finally, online user-generated content.
- 36.
- 37.
- 38.
- 39.
Federal Trade Commission 2014, p. 13.
- 40.
Mantelero 2014a.
- 41.
On the risks related to “democratized big data”, Hartzog and Selinger 2013, pp. 84–85.
- 42.
- 43.
- 44.
E.g., Taylor and Schroeder 2015.
- 45.
- 46.
See also Council of Europe 2008.
- 47.
On the current relationship between data retention and access to personal information by government agencies or law enforcement authorities, Reidenberg 2014.
- 48.
- 49.
European Parliament 2013; European Parliament, Directorate General for Internal Policies, Policy Department C: Citizens’ Rights and Constitutional Affairs, Civil Liberties, Justice and Home Affairs 2013b, pp. 14–16; European Parliament, Directorate General for Internal Policies, Policy Department C: Citizens’ Rights and Constitutional Affairs, Civil Liberties, Justice and Home Affairs 2013a, pp. 12–16. See also DARPA 2002; National Research Council 2008; Congressional Research Service 2008.
- 50.
This is the case with virtual currency (Facebook Libra), public health purposes (the role of Google and Apple in contact tracing in the Covid pandemic), and education (e-learning platforms).
- 51.
- 52.
See Sect. 1.1.
- 53.
- 54.
- 55.
Tene and Polonetsky 2012. Big Data analytics make it possible to collect a large amount of information from different sources and to analyse it in order to identify new trends and correlations in data sets. This analysis can be conducted to pursue purposes not defined in advance, depending on emerging correlations and different from the initial collection purposes.
- 56.
- 57.
- 58.
See Sect. 1.2.
- 59.
See above Sect. 1.2.
- 60.
Solove 2013, p. 1899.
- 61.
The Boston Consulting Group 2012, p. 4.
- 62.
See also Recital No. 43, GDPR (“In order to ensure that consent is freely given, consent should not provide a valid legal ground for the processing of personal data in a specific case where there is a clear imbalance between the data subject and the controller, in particular where the controller is a public authority and it is therefore unlikely that consent was freely given in all the circumstances of that specific situation”).
- 63.
See Chap. 3.
- 64.
- 65.
Kaminski and Malgieri 2021.
- 66.
Mayer-Schönberger 1997, pp. 219–241.
- 67.
Mantelero 2014c.
- 68.
Tene and Polonetsky 2012.
- 69.
- 70.
See Science and Technology Options Assessment 2014, pp. 94–99 and 116–121.
- 71.
See Sect. 1.2.
- 72.
The social lock-in effect is one of the consequences of the dominant position held by some big players and is most evident in the social media market. It is the incentive to remain on a network, given the numbers of connections and social relationships created and managed by the user of a social networking platform. This lock-in intrinsically limits the user’s ability to recreate the same network elsewhere, whereas a technological lock-in is due to the technological standards and data formats adopted by the service providers. The social lock-in limits the effectiveness of legal provisions concerning data portability, due to the non-technical disadvantages inherent in migrating from one service to another offering the same features.
- 73.
- 74.
- 75.
See Mantelero and Esposito 2021.
- 76.
- 77.
Recital n. 75, GDPR.
- 78.
- 79.
E.g. CNIL 2018a, b, c; Information Commissioner’s Office 2018; Information Commissioner’s Office. Data protection impact assessments https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/accountability-and-governance/data-protection-impact-assessments/. Accessed 17 August 2021; Agencia Española de Protección de Datos 2021, 2018.
- 80.
On the contrary, this multi-criteria approach is adopted in the present book, see below in the text. See also Article 29 Data Protection Working Party 2014b (“The risk-based approach goes beyond a narrow “harm-based-approach” that concentrates only on damage and should take into consideration every potential as well as actual adverse effect, assessed on a very wide scale ranging from an impact on the person concerned by the processing in question to a general societal impact (e.g. loss of social trust)”).
- 81.
See Articles 25 and 26, GDPR.
- 82.
Beck 1992.
- 83.
Ambrus 2017.
- 84.
AI Now Institute 2018.
- 85.
Skorupinski and Ott 2002.
- 86.
In some cases it is hard to define the borders between the different data processing fields and the granularity of the subject matter (e.g. the blurred confines between well-being devices/apps and medical devices).
- 87.
Specific impact assessments for Big Data analytics and for AI are not necessary, but we do need one impact assessment for data-driven decisions in healthcare and another for smart cities, given the different values underpinning the two sectors. For example, whereas civic engagement, participation and equal treatment will be the driving values behind a smart city technology impact assessment, in healthcare freedom of choice and the no-harm principle may play a more critical role. Differing contexts have different “architectures of values” that should be taken into account as a benchmark for the assessment models.
- 88.
- 89.
See also Skorupinski and Ott 2002, p. 101 (“Talking about risk […] is not possible without ethical considerations […] when it comes to a decision on whether risk is to be taken, obviously an orientation on norms and values is unavoidable”); United Nations – General Assembly 2021, para 26; Mantelero 2017.
- 90.
For the purposes of this book, the notions of human rights and fundamental rights are considered equivalent. See also European Union Agency for Fundamental Rights https://fra.europa.eu/en/about-fundamental-rights/frequently-asked-questions#difference-human-fundamental-rights accessed 10 January 2021 (“The term ‘fundamental rights’ is used in European Union (EU) to express the concept of ‘human rights’ within a specific EU internal context. Traditionally, the term ‘fundamental rights’ is used in a constitutional setting whereas the term ‘human rights’ is used in international law. The two terms refer to similar substance as can be seen when comparing the content in the Charter of Fundamental Rights of the European Union with that of the European Convention on Human Rights and the European Social Charter.”).
- 91.
- 92.
- 93.
See also Stahl and Wright 2018.
- 94.
- 95.
See Chap. 4.
- 96.
Referring to this universal approach, we are aware of the underlying tensions that characterise it, the process of contextualisation of these rights and freedoms (appropriation, colonisation, vernacularisation, etc.) and the theoretical debate on universalism and cultural relativism in human rights. See Levitt and Merry 2009; Benhabib 2008; Merry 2006. See also Goldstein 2007; Leve 2007; Risse and Ropp 1999; O’Sullivan 1998. However, from a policy and regulatory perspective, the human rights framework, including its nuances, can provide a more widely applicable common framework than other context-specific proposals on the regulation of the impact of AI. Furthermore, the proposed methodology includes in its planning section the analysis of the human rights background, with a contextualisation based on local jurisprudence and laws, as well as the identification and engagement of potential stakeholders who can contribute to a more context-specific characterisation of the human rights framework.
- 97.
The human rights-based approach includes a number of ‘process principles’, namely: participation and inclusion, non-discrimination and equality, and transparency and accountability. See The Danish Institute for Human Rights 2020.
- 98.
Apart from the central role of privacy and data protection, a first analysis of the decisions concerning data processing reveals the crucial role played by the principles of non-discrimination, transparency and participation, by the safeguarding of human dignity, physical integrity and identity, and by freedom of choice, of expression, of education, and of movement. See Mantelero and Esposito 2021, section 4.
- 99.
See Chap. 3.
- 100.
See below Sect. 1.7.
- 101.
See below Sect. 1.7.
- 102.
Esposito et al. 2018.
- 103.
Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission 2020.
- 104.
See Chap. 2. For an example of a human rights checklist, see the Digital Rights Check realised by the Deutsche Gesellschaft für Internationale Zusammenarbeit GmbH and The Danish Institute for Human Rights, available at https://digitalrights-check.toolkit-digitalisierung.de/. Accessed 20 March 2022.
- 105.
See Chap. 3.
- 106.
The role of participatory approaches and stakeholder engagement is specifically recognised in the context of fundamental rights. The Danish Institute for Human Rights 2020, p. 116; De Hert 2012, p. 72 (“Further case law is required to clarify the scope of the duty to study the impact of certain technologies and initiatives, also outside the context of environmental health. Regardless of the terms used, one can safely adduce that the current human rights framework requires States to organise solid decision-making procedures that involve the persons affected by technologies”).
- 107.
Participation of the different stakeholders (e.g. engagement of civil society and the business community in defining sectoral guidelines on values) can achieve a more effective result than mere transparency, although the latter has been emphasised in the recent debate on data processing. The Danish Institute for Human Rights 2020, p. 11 (“Engagement with rights-holders and other stakeholders is essential in HRIA […] Stakeholder engagement has therefore been situated as the core cross-cutting component in the Guidance and Toolbox”); Walker 2009, p. 41 (“participation is not only an end – a right – in itself, it is also a means of empowering communities to influence the policies and projects that affect them, as well as building the capacity of decision-makers to take into account the rights of individuals and communities when formulating and implementing projects and policies”). A more limited level of engagement, focused on awareness, was suggested by the Council of Europe 2018, p. 45 (“Public awareness and discourse are crucially important. All available means should be used to inform and engage the general public so that users are empowered to critically understand and deal with the logic and operation of algorithms. This can include but is not limited to information and media literacy campaigns. Institutions using algorithmic processes should be encouraged to provide easily accessible explanations with respect to the procedures followed by the algorithms and to how decisions are made. Industries that develop the analytical systems used in algorithmic decision-making and data collection processes have a particular responsibility to create awareness and understanding, including with respect to the possible biases that may be induced by the design and use of algorithms”).
- 108.
Stakeholders, unlike those groups directly affected by data processing, play a more critical role in those contexts where direct consultation may put groups at risk, due to the lack of adequate legal safeguards provided by local jurisdictions to human rights. See also Kemp and Vanclay 2013, p. 92 (“For situations where direct consultation may put groups at risk, it may be necessary to engage third parties, such as NGOs or other agencies or individuals who have worked closely with particular groups. Assessment teams must be vigilant about ensuring that individuals and groups are not put at risk by virtue of the human rights assessment itself”).
- 109.
For a different approach to participation, more oriented towards the participation of lay people in expert committees, in the context of Technology Assessment, see Skorupinski and Ott 2002, pp. 117–120.
- 110.
See also United Nations Office of the High Commissioner for Human Rights 2006.
- 111.
Wright and Mordini 2012, p. 402.
- 112.
Palm and Hansson 2006, pp. 550–551.
- 113.
See Chap. 2.
- 114.
See Chap. 2, Sect. 2.3.2 and Chap. 3. See also Manifesto for Agile Software Development http://agilemanifesto.org/, accessed 5 February 2018; Gürses and Van Hoboken 2017.
- 115.
See also Chap. 3.
- 116.
- 117.
- 118.
See Chap. 3.
- 119.
To offset the related costs, permanent expert committees might be set up by groups of enterprises or established to serve all SMEs in a given area.
- 120.
On the nature of these groups and their potential influence on the difficulty of engaging them in the assessment, Mantelero 2016.
- 121.
See Sect. 1.5.
- 122.
See fn 18.
- 123.
Westin 1970.
- 124.
- 125.
- 126.
See also van der Sloot 2015, pp. 25–50 (“the right to privacy has been used by the Court to provide protection to a number of matters which fall primarily under the realm of other rights and freedoms contained in the Convention”).
- 127.
- 128.
Whitman 2004.
- 129.
- 130.
Rodotà 2004.
- 131.
See Articles 24 and 35, GDPR.
- 132.
Wright and De Hert 2012.
- 133.
See, e.g., the following decisions: Garante per la protezione dei dati personali (Italian DPA), 1 February 2018, doc. web n. 8159221; Garante per la protezione dei dati personali, 8 September 2016, n. 350, doc. web 5497522; Garante per la protezione dei dati personali, 4 June 2015, n. 345, doc. web n. 4211000; Garante per la protezione dei dati personali, 8 May 2013, n. 230, doc. web n. 2433401; Agencia Española de Protección de Datos (Spanish DPA), Expediente n. 01769/2017; Agencia Española de Protección de Datos, Expediente n. 01760/2017; Agencia Española de Protección de Datos, Resolución R/01208/2014; Agencia Española de Protección de Datos, (Gabinet Juridico) Informe 0392/2011; Agencia Española de Protección de Datos, (Gabinet Juridico) Informe 368/2006; Commission de la protection de la vie privée (Belgian DPA), 15 December 2010, recommandation n. 05/2010; Commission Nationale de l’Informatique et des Libertés (French DPA), 17 July 2014, deliberation n. 2014–307; Commission Nationale de l’Informatique et des Libertés, 21 June 1994, deliberation n. 94–056.
- 134.
Regarding the focus of DPAs’ decisions on national data protection laws and their provisions, see also the results of the empirical analysis carried out by Porcedda 2017.
- 135.
See above fn 133.
- 136.
De Hert 2012, p. 46, who describes the application of the principle of proportionality as a “political” test. With regard to the jurisprudence of the European Court of Human Rights, this author also points out how “The golden trick for Strasbourg is to see almost every privacy relevant element as one that has to do with the required legal basis”.
- 137.
See e.g. Court of Justice of the European Union, 13 May 2014, Case C-131/12, Google Spain SL, Google Inc. v Agencia Española de Protección de Datos, Mario Costeja González, para 81 (“In the light of the potential seriousness of that interference, it is clear that it cannot be justified by merely the economic interest which the operator of such an engine has in that processing”, emphasis added).
- 138.
See Recital n. 75.
- 139.
For a proposed integration of PIA and EIA, see Wright and Friedewald 2013, pp. 760–762. However, these authors do not adopt a broader viewpoint focused on human rights assessment.
- 140.
Despite this difference, HRESIA and PIA/DPIA take a common approach in terms of architecture, since both are rights-based assessments. See also The Danish Institute for Human Rights 2020, p. 98 (“Human rights impacts cannot be subject to ‘offsetting’ in the same way that, for example, environmental impacts can be. For example, a carbon offset is a reduction in emissions of carbon dioxide made in order to compensate for or to offset an emission made elsewhere. With human rights impacts, on the other hand, due to the fact that human rights are indivisible and interrelated, it is not appropriate to offset one human rights impact with a ‘positive contribution’ elsewhere”).
- 141.
Bygrave 2002, pp. 61–63 and 339 on processing data for legitimate purpose (“solid grounds exist for arguing that the notion of ‘legitimate’ denotes a criterion of social acceptability, such that personal data should only be processed for purposes that do not run counter to predominant social mores […] The bulk of data protection instruments comprehend legitimacy prima facie in terms of procedural norms hinging on a criterion of lawfulness […] Very few expressly operate with a broader criterion of social justification. Nevertheless, the discretionary powers given by some national laws to national data protection authorities have enabled the latter to apply a relatively wide-ranging test of social justification”). See also New South Wales Privacy Committee 1977; Kirby 1981.
- 142.
- 143.
HRIA has its roots in Social Impact Assessment (SIA) models; Walker 2009, p. 5. Nevertheless, due to the existing interplay between human rights and social and ethical values, it is hard to describe this relationship as one of mere derivation, since human rights notions have necessarily affected the values adopted in SIA models. For example, the International Association for Impact Assessment Principles refers to Article 1 of the UN Declaration on the Right to Development, by which every human being and all peoples are entitled to participate in, contribute to, and enjoy economic, social, cultural and political development.
- 144.
Mantelero 2014c.
- 145.
Bygrave 2002, p. 86 (“Under many European data protection regimes, paternalistic forms of control have traditionally predominated over participatory forms, though implementation of the EC Directive changes this weighting somewhat in favour of the latter”).
- 146.
Access to information is both a human right per se and a key process principle of HRIA.
- 147.
See above fn. 13.
- 148.
- 149.
- 150.
- 151.
SATORI project 2017, p. 6, defines ethical impact as the “impact that concerns or affects human rights and responsibilities, benefits and harms, justice and fairness, well-being and the social good”. Although other authors, Wright and Mordini 2012, use the acronym EIA for Ethical Impact Assessment, the different acronym EtIA is used here to avoid any confusion with the Environmental Impact Assessment, which is usually identified with the acronym EIA.
- 152.
- 153.
Burdge and Vanclay 1996, p. 59 (“Social impacts include all social and cultural consequences to human populations of any public or private actions that alter the ways in which people live, work, play, relate to one another, organize to meet their needs, and generally cope as members of society”). See also Massarani et al. 2007.
- 154.
In this sense, the ethical and social impact assessment (ESIA) is described as the outermost circle to which the PIA can be extended by Raab and Wright 2012, pp. 379–382.
- 155.
See also Svensson 2011, p. 84.
- 156.
Kemp and Vanclay 2013, pp. 90–91 (“Human rights impact assessment (HRIA) differs from SIA in the sense that it proceeds from a clear starting point of the internationally recognised rights, whereas SIA proceeds following a scoping process whereby all stakeholders (including the affected communities) nominate key issues in conjunction with the expert opinion of the assessor in terms of what the key issues might be based on experience in similar cases elsewhere and a conceptual understanding”).
- 157.
Vanclay 2006, p. 9.
- 158.
- 159.
See, e.g., with regard to stakeholder engagement Wright and Mordini 2012, p. 397 (“One of the objectives of an ethical impact assessment is to engage stakeholders in order to identify, discuss and find ways of dealing with ethical issues arising from the development of new technologies, services, projects or whatever”). See also Chap. 3.
- 160.
Wright and Mordini 2012, p. 399 (“With specific regard to values, it draws on those stated in the EU Reform Treaty, signed by Heads of State and Government at the European Council in Lisbon on 13 December 2007, such as human dignity, freedom, democracy, human right protection, pluralism, non-discrimination, tolerance, justice, solidarity and gender equality”). See also Callies et al. 2017, p. 31. For a broader analysis of ethical issues in risk assessment, see also Asveld and Roeser 2009.
- 161.
- 162.
- 163.
On the limits of an approach focused on the individual rather than the collective dimension, Walker 2009, p. 21 (“Combatting discrimination is not simply a matter of prohibiting acts of discrimination or discriminatory legislation, but also entails an obligation on the State to take action to reverse the underlying biases in society that have led to discrimination and, where appropriate, take temporary special measures in favour of people living in disadvantaged situations so as to promote substantive equality”). See also Mitnick 2018; George 1989.
- 164.
For example, based on previous experience, discrimination is primarily viewed within the traditional categories (sex, religion, etc.). See for example Recital 71 of the GDPR on automated decision-making, which refers to “discriminatory effects on natural persons on the basis of racial or ethnic origin, political opinion, religion or beliefs, trade union membership, genetic or health status or sexual orientation”. However, groups shaped by analytics and AI differ from the traditional notion of groups in the sociological sense of the term considered by the legislation: they have a variable geometry and individuals can shift from one group to another.
- 165.
These categories, used in discriminatory practice, are to a large extent the special categories referred to in the data protection regulations.
- 166.
This notion must encompass both the prejudicial treatment of groups of people – regardless of whether they belong to special categories – and the consequences of unintentional bias in the design, data collection and decision-making stages of data-intensive applications. Indeed, these consequences may negatively impact on individuals and society, even though they do not concern forms of discrimination based on racial or ethnic origin, political opinions, religious or philosophical beliefs or other elements that traditionally characterise minorities or vulnerable groups. For example, Kate Crawford has described the case of the City of Boston and its StreetBump smartphone app to passively detect potholes. The application had a signal problem, due to the bias generated by the low penetration of smartphones among lower income and older residents. While the Boston administration took this bias into account and solved the problem, less enlightened public officials might underestimate such considerations and make potentially discriminatory decisions. See Crawford 2013; Lerman 2013. Another example is the Progressive case, in which an insurance company obliged drivers to install a small monitoring device in their cars in order to receive the company’s best rates. The system considered driving late at night as a negative factor but did not take into account the potential bias against low-income individuals, who are more likely to work night shifts, compared with late-night party-goers, “forcing them [low-income individuals] to carry more of the cost of intoxicated and other irresponsible driving that happens disproportionately at night”, Robinson et al. 2014, pp. 18–19. Finally, commercial practices may lead to price discrimination or the adoption of differential terms and conditions depending on the assignment of consumers to a specific cluster.
Thus, consumers classified as “financially challenged” belong to a cluster “[i]n the prime working years of their lives […] including many single parents, struggl[ing] with some of the lowest incomes and little accumulation of wealth”. This implies the following predictive viewpoint, based on big data analytics and regarding all consumers in the cluster: “[n]ot particularly loyal to any one financial institution, [and] they feel uncomfortable borrowing money and believe they are better off having what they want today as they never know what tomorrow will bring” (Federal Trade Commission 2014, p. 20). It is not hard to imagine the potential discriminatory consequences of similar classifications with regard to individuals and groups. See also Poort and Zuiderveen Borgesius 2021.
- 167.
See also Mantelero 2017.
- 168.
See, e.g., the case of redress procedures for the protection of consumer rights.
- 169.
Stilgoe et al. 2013, pp. 1568–1580.
- 170.
Regarding this kind of hendiadys (“fundamental rights and freedoms”), see also De Hert and Gutwirth 2004, pp. 319–320 (“legal scholars in Europe have devoted much energy in transforming or translating liberty questions into questions of ‘human rights’. One of the advantages of this ‘rights approach’ is purely strategic: it facilitates the bringing of cases before the European Court of Human Rights, a Court that is considered to have higher legal status […] There are however more reasons to think in terms of rights. It is rightly observed that the concept of human rights in legal practice is closely linked to the concept of subjective rights. Lawyers do like the idea of subjective rights. They think these offer better protection than ‘liberty’ or ‘liberties’”).
- 171.
Regarding this approach in the context of data processing, see also H2020 Virt-EU project https://virteuproject.eu/, accessed 19 December 2017.
- 172.
- 173.
Walker 2009, pp. 3–4; Massarani et al. 2007, pp. 143–149. See also Burdge and Vanclay 1996, pp. 62–64 and Ruggie 2007 (“However, the ESIA [Environmental and Social Impact Assessment] approach of studying the direct impacts of a business can miss human rights violations that are embedded in a society”).
- 174.
An early suggestion in this sense was provided by the Council of Europe 2018, p. 45 (“Human rights impact assessments should be conducted before making use of algorithmic decision-making in all areas of public administration”). More recently, proposals on AI regulation under discussion at the European Union and the Council of Europe have highlighted the importance of assessing the impact of AI applications on human rights, albeit with some limitations; see Chap. 4. See also United Nations – General Assembly 2021, paras 51–52.
- 175.
For an approach oriented toward a mandatory impact assessment for AI systems see the proposals of the European Commission and the Council of Europe on AI regulation discussed in Chap. 4.
- 176.
See Chap. 4.
References
Acquisti A, Brandimarte L, Loewenstein G (2015) Privacy and human behavior in the age of information. Science 347(6221):509–514.
Acquisti A, Grossklags J (2005) Privacy and rationality in individual decision making. Security & Privacy, IEEE 3(1):26–33.
Agencia Española de Protección de Datos (2018) Guía práctica para las evaluaciones de impacto en la protección de los datos sujetas al RGPD. https://www.aepd.es/sites/default/files/2019-09/guia-evaluaciones-de-impacto-rgpd.pdf. Accessed 4 March 2018.
Agencia Española de Protección de Datos (2021) Gestión del riesgo y evaluación de impacto en tratamientos de datos personales. https://www.aepd.es/es/node/46578. Accessed 17 August 2021.
AI Now Institute (2018) Algorithmic Impact Assessments: Toward Accountable Automation in Public Agencies. https://medium.com/@AINowInstitute/algorithmic-impact-assessments-toward-accountable-automation-in-public-agencies-bd9856e6fdde. Accessed 4 March 2018.
Ambrus M (2017) The European Court of Human Rights as Governor of Risk. In: Ambrus M, Rayfuse R, Werner W (eds) Risk and the Regulation of Uncertainty in International Law. Oxford University Press, Oxford, pp 99–115.
Arai-Takahashi Y, Arai Y (2002) The Margin of Appreciation Doctrine and the Principle of Proportionality in the Jurisprudence of the ECHR. Intersentia, Antwerp.
Article 29 Data Protection Working Party (2011) Opinion 15/2011 on the definition of consent. http://ec.europa.eu/justice/policies/privacy/docs/wpdocs/2011/wp187_en.pdf. Accessed 27 February 2014.
Article 29 Data Protection Working Party (2013a) Opinion 03/2013 on purpose limitation. http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf. Accessed 27 February 2014.
Article 29 Data Protection Working Party (2013b) Opinion 06/2013 on open data and public sector information (‘PSI’) reuse. http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp207_en.pdf. Accessed 27 February 2014.
Article 29 Data Protection Working Party (2014a) Opinion 06/2014 on the notion of legitimate interests of the data controller under Article 7 of Directive 95/46/EC. http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp217_en.pdf. Accessed 27 February 2014.
Article 29 Data Protection Working Party (2014b) Statement on the role of a risk-based approach in data protection legal frameworks. http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp218_en.pdf. Accessed 27 February 2014.
Article 29 Data Protection Working Party (2017) Guidelines on Data Protection Impact Assessment (DPIA) and determining whether processing is “likely to result in a high risk” for the purposes of Regulation 2016/679. Adopted on 4 April 2017 as last revised and adopted on 4 October 2017. http://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=611236. Accessed 13 April 2018.
Asveld L, Roeser S (eds) (2009) The Ethics of Technological Risk. Earthscan, London.
Barocas S, Selbst AD (2016) Big Data’s Disparate Impact. California Law Review 104(3):671.
Beck U (1992) Risk Society: Towards a New Modernity. Sage, London.
Becker HA (2001) Social impact assessment. Eur. J. Oper. Res. 128(2):311–321.
Becker HA, Vanclay F (eds) (2003) The International Handbook of Social Impact Assessment. Conceptual and Methodological Advances. Edward Elgar, Cheltenham.
Bellagio Big Data Workshop Participants (2014) Big data and positive social change in the developing world: A white paper for practitioners and researchers. http://www.rockefellerfoundation.org/uploads/files/c220f1f3-2e9a-4fc6-be6c-45d42849b897-big-data-and.pdf. Accessed 28 June 2015.
Benhabib S (2008) The Legitimacy of Human Rights. Daedalus 137:94–104.
Bennett CJ (1992) Regulating Privacy: Data Protection and Public Policy in Europe and the United States. Cornell University Press, Ithaca, New York.
Bennett CJ, Haggerty KD, Lyon D, Steeves V (eds) (2014) Transparent Lives Surveillance in Canada. Athabasca University Press, Edmonton.
Bloustein EJ (1977) Group Privacy: The Right to Huddle. Rut.-Cam. L. J. 8:219–283.
Bollier D (2010) The Promise and Perils of Big Data. Aspen Institute, Communications and Society Program. http://www.aspeninstitute.org/sites/default/files/content/docs/pubs/The_Promise_and_Peril_of_Big_Data.pdf. Accessed 27 February 2014.
boyd d (2012) Networked Privacy. Surv. & Soc. 10(3/4):348–350.
boyd d (2016) Untangling Research and Practice: What Facebook’s “Emotional Contagion” Study Teaches Us. Research Ethics 12:4–13.
boyd d, Crawford K (2012) Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon. Inf., Comm. & Soc. 15(5):662–679.
Brandimarte L, Acquisti A, Loewenstein G (2010) Misplaced Confidences: Privacy and the Control Paradox. Ninth Annual Workshop on the Economics of Information Security. http://www.heinz.cmu.edu/~acquisti/papers/acquisti-SPPS.pdf. Accessed 27 February 2014.
Breckenridge AC (1970) The Right to Privacy. University of Nebraska Press, Lincoln.
Brenton M (1964) The Privacy Invaders. Coward-McCann, New York.
Brown I (2012) Government access to private-sector data in the United Kingdom. Int’l Data Privacy L. 2(4):230–238.
Brown I (2013) Lawful Interception Capability Requirements. https://www.scl.org/articles/2878-lawful-interception-capability-requirements. Accessed 12 June 2016.
Brownsword R (2009) Consent in Data Protection Law: Privacy, Fair Processing and Confidentiality. In: Gutwirth S, Poullet Y, De Hert P, de Terwangne C, Nouwt S (eds) Reinventing data protection? Springer, Dordrecht, pp 83–110.
Brüggemeier G, Colombi Ciacchi A, O’Callaghan P (2010) Personality Rights in European Tort Law. Cambridge University Press, New York.
Burdge RJ, Vanclay F (1996) Social Impact Assessment: A Contribution to the State of the Art Series. Impact Assessment 14(1):59–86.
Bygrave LA (2002) Data Protection Law. Approaching Its Rationale, Logic and Limits. Kluwer Law International, The Hague/London/New York.
Callies I, Jansen P, Reijers W, Douglas D, Gurzawska A, Kapeller A, Brey P, Benčin R, Warso Z (2017) Outline of an Ethics Assessment Framework. http://satoriproject.eu/media/SATORI-FRAMEWORK-2017-05-03.pdf. Accessed 27 April 2018.
Calo RM (2013) Against Notice Skepticism in Privacy (and Elsewhere). Notre Dame L. Rev. 87(3):1027–1072.
Calo R (2014) Digital Market Manipulation. Geo. Wash. L. Rev. 82(4):995–1051.
Cannataci J (2008) Lex Personalitatis & Technology-Driven Law. SCRIPT-ed 5(1):1–6.
Cannataci JA, Zhao B, Torres Vives G, Monteleone S, Mifsud Bonnici J, Moyakine E (2016) Privacy, Free Expression and Transparency: Redefining Their New Boundaries in the Digital Age. United Nations Educational, Scientific and Cultural Organization, Paris.
Castells M (1996) The Rise of the network society. Blackwell Publishers, Cambridge, MA.
Cate FH (2006) The Failure of Fair Information Practice Principles. In: Winn JK (ed) Consumer Protection in the Age of the ‘Information Economy’. Ashgate, Hampshire, pp 341–378.
Cate FH, Dempsey JX, Rubinstein IS (2012) Systematic government access to private-sector data. Int’l Data Privacy L. 2(4):195–199.
Cate FH, Mayer-Schönberger V (2013a) Data Use and Impact. Global Workshop. http://cacr.iu.edu/sites/cacr.iu.edu/files/Use_Workshop_Report.pdf. Accessed 27 February 2014.
Cate FH, Mayer-Schönberger V (2013b) Notice and consent in a world of Big Data. Int’l Data Privacy L. 3(2):67–73.
Centre for European Policy Studies (2010) Global Data Transfers: The Human Rights Implications. https://www.ceps.eu/publications/global-data-transfers-human-rights-implications. Accessed 13 November 2017.
Centre for Good Governance (2006) A Comprehensive Guide for Social Impact Assessment. http://unpan1.un.org/intradoc/groups/public/documents/cgg/unpan026197.pdf. Accessed 2 May 2018.
Clifford D, Ausloos J (2018) Data Protection and the Role of Fairness. Yearbook of European Law 37:130–187.
CNIL (2018a) Privacy Impact Assessment (PIA). Knowledge Bases. https://www.cnil.fr/sites/default/files/atoms/files/cnil-pia-3-en-knowledgebases-2018-02-19_diffusable_en_pdf_valide_jli.pdf. Accessed 28 February 2018.
CNIL (2018b) Privacy Impact Assessment (PIA). Methodology. https://www.cnil.fr/sites/default/files/atoms/files/cnil-pia-1-en-methodology.pdf. Accessed 28 February 2018.
CNIL (2018c) Privacy Impact Assessment (PIA). Templates. https://www.cnil.fr/sites/default/files/atoms/files/cnil-pia-2-en-templates.pdf. Accessed 28 February 2018.
Cohen JE (2000) Examined Lives: Informational Privacy and the Subject as an Object. Stan. L. Rev. 52:1373–1438.
Cohen JE (2013) What Privacy is For. Harv. L. Rev. 126:1904–1933.
Cohen JE (2019) Between Truth and Power: The Legal Constructions of Informational Capitalism. Oxford University Press, New York.
Committee on Commerce, Science, and Transportation (2013) A Review of the Data Broker Industry: Collection, Use, and Sale of Consumer Data for Marketing Purposes. http://educationnewyork.com/files/rockefeller_databroker.pdf. Accessed 20 February 2014.
Congressional Research Service (2008) CRS Report for Congress. Data Mining and Homeland Security: An Overview. www.fas.org/sgp/crs/homesec/RL31798.pdf. Accessed 14 December 2013.
Council of Europe (2008) Guidelines for the cooperation between law enforcement and internet service providers against cybercrime. https://rm.coe.int/16802fa3ba. Accessed 27 February 2014.
Council of Europe (2017) Guidelines on the protection of individuals with regard to the processing of personal data in a world of Big Data. https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=09000016806ebe7a. Accessed 4 May 2017.
Council of Europe (2018) Algorithms and Human Rights. Study on the Human Rights Dimensions of Automated Data Processing Techniques and Possible Regulatory Implications. https://edoc.coe.int/en/internet/7589-algorithms-and-human-rights-study-on-the-human-rights-dimensions-of-automated-data-processing-techniques-and-possible-regulatory-implications.html. Accessed 5 May 2018.
Council of Europe, Committee of Ministers (2020) Recommendation CM/Rec(2020)1 of the Committee of Ministers to member States on the human rights impacts of algorithmic systems. https://unesdoc.unesco.org/ark:/48223/pf0000377881. Accessed 24 May 2020.
Council of Europe, Consultative Committee of the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108) (2019) Guidelines on Artificial Intelligence and Data Protection, T-PD(2019)01. https://unesdoc.unesco.org/ark:/48223/pf0000377881. Accessed 15 February 2019.
Council of Europe, Expert Committee on human rights dimensions of automated data processing and different forms of artificial intelligence (MSI-AUT) (2019) Responsibility and AI. A Study of the Implications of Advanced Digital Technologies (Including AI Systems) for the Concept of Responsibility within a Human Rights Framework. Rapporteur: Karen Yeung. https://rm.coe.int/responsability-and-ai-en/168097d9c5. Accessed 11 July 2021.
Crawford K (2013) The Hidden Biases in Big Data. Harv. Bus. Rev. April 1, 2013. https://hbr.org/2013/04/the-hidden-biases-in-big-data. Accessed 29 January 2015.
Crawford K, Schultz J (2014) Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms. B.C.L. Rev. 55(1):93–128.
DARPA (2002) Total Information Awareness Program (TIA). System Description Document (SDD), Version 1.1. http://epic.org/privacy/profiling/tia/tiasystemdescription.pdf. Accessed 14 December 2013.
De Hert P (2005) Balancing security and liberty within the European human rights framework. A critical reading of the Court’s case law in the light of surveillance and criminal law enforcement strategies after 9/11. Utrecht Law Review 1(1):68–96.
De Hert P (2012) A Human Rights Perspective on Privacy and Data Protection Impact Assessments. In: Wright D, De Hert P (eds) Privacy Impact Assessment. Springer, Dordrecht, pp 33–76.
De Hert P, Gutwirth S (2004) Rawls’ political conception of rights and liberties. An unliberal but pragmatic approach to the problems of harmonisation and globalisation. In: Van Hoecke M (ed) Epistemology and methodology of comparative law in the light of European Integration. Hart Publishing, London, pp 317–357.
Dietz T (1987) Theory and method in social impact assessment. Sociol. Inq. 57(1):54–69.
Dwork C, Mulligan DK (2013) It’s not Privacy and It’s not Fair. Stan. L. Rev. Online 66:35–40.
Esposito MS, Mantelero A, Sarale A, Thobani S, Nemorin S (2018) Deliverable 4.3. Second Report: Report to the internal members of the consortium on the PESIA methodology and initial guidelines. Project no. 732027 Horizon 2020. Values and ethics in Innovation for Responsible Technology in EUrope (VIRT-EU). https://cordis.europa.eu/project/id/732027/results/it. Accessed 16 January 2020.
European Commission, Directorate General for Communication Networks, Content and Technology (2018) A Multi-Dimensional Approach to Disinformation Report of the Independent High Level Group on Fake News and Online Disinformation. https://ec.europa.eu/digital-single-market/en/news/final-report-high-level-expert-group-fake-news-and-online-disinformation. Accessed 22 March 2018.
European Data Protection Supervisor (2014) Preliminary Opinion of the European Data Protection Supervisor. Privacy and competitiveness in the age of big data: The interplay between data protection, competition law and consumer protection in the Digital Economy. https://secure.edps.europa.eu/EDPSWEB/webdav/site/mySite/shared/Documents/Consultation/Opinions/2014/14-03-26_competitition_law_big_data_EN.pdf. Accessed 27 February 2014.
European Data Protection Supervisor, Ethics Advisory Group (2018) Towards a digital ethics. https://edps.europa.eu/sites/edp/files/publication/18-01-25_eag_report_en.pdf. Accessed 4 March 2018.
European Parliament (2013) Resolution of 4 July 2013 on the US National Security Agency surveillance programme, surveillance bodies in various Member States and their impact on EU citizens’ privacy. http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//TEXT+TA+P7-TA-2013-0322+0+DOC+XML+V0//EN. Accessed 27 February 2014.
European Parliament, Directorate General for Internal Policies, Policy Department C: Citizens’ Rights and Constitutional Affairs, Civil Liberties, Justice and Home Affairs (2013a) National Programmes for Mass Surveillance of Personal data in EU Member States and Their Compatibility with EU Law. http://www.europarl.europa.eu/committees/it/libe/studiesdownload.html?languageDocument=EN&file=98290. Accessed 27 February 2014.
European Parliament, Directorate General for Internal Policies, Policy Department C: Citizens’ Rights and Constitutional Affairs, Civil Liberties, Justice and Home Affairs (2013b) The US National Security Agency (NSA) surveillance programmes (PRISM) and Foreign Intelligence Surveillance Act (FISA) activities and their impact on EU citizens. http://info.publicintelligence.net/EU-NSA-Surveillance.pdf. Accessed 14 December 2013.
European Parliamentary Research Service (2020) The Impact of the General Data Protection Regulation (GDPR) on Artificial Intelligence. https://www.europarl.europa.eu/thinktank/en/document.html?reference=EPRS_STU(2020)641530. Accessed 12 August 2021.
Evans C, Evans S (2006) Evaluating the Human Rights Performance of Legislatures. Human Rights Law Review 6(3):545–570.
Federal Trade Commission (2014) Data Brokers: A Call for Transparency and Accountability. https://www.ftc.gov/system/files/documents/reports/data-brokers-call-transparency-accountability-report-federal-trade-commission-may-2014/140527databrokerreport.pdf. Accessed 27 February 2016.
Felzmann H, Fosch-Villaronga E, Lutz C, Tamò-Larrieux A (2019) Transparency You Can Trust: Transparency Requirements for Artificial Intelligence between Legal Norms and Contextual Concerns. Big Data & Society 6(1), 2053951719860542. https://doi.org/10.1177/2053951719860542. Accessed 11 August 2021.
Floridi L (2014) Open Data, Data Protection, and Group Privacy. Philos. Technol. 27(1):1–3.
Fritsch E, Shklovski I, Douglas-Jones R (2018) Calling for a revolution: An analysis of IoT manifestos. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal, Canada, 21–26 April 2018). https://doi.org/10.1145/3173574.3173876. Accessed 3 May 2018.
González Fuster G (2014) The Emergence of Personal Data Protection as a Fundamental Right of the EU. Springer International Publishing, Cham.
George RP (1989) Individual rights, collective interests, public law, and American politics. Law and Philosophy 8:245–261.
Goldstein DM (2007) Human Rights as Culprit, Human Rights as Victim: Rights and Security in the State of Exception. In: Goodale M, Merry SE (eds) The Practice of Human Rights: Tracking Law between the Global and the Local. Cambridge University Press, Cambridge, pp 49–77.
Gostin L, Mann JM (1994) Towards the Development of a Human Rights Impact Assessment for the Formulation and Evaluation of Public Health Policies. Health and Human Rights 1(1):58–80.
Götzmann N, Vanclay F, Seier F (2016) Social and Human Rights Impact Assessments: What Can They Learn from Each Other? Impact Assessment and Project Appraisal 34(1):14–23.
Greer S (2000) The margin of appreciation: interpretation and discretion under the European Convention on Human Rights. Editions du Conseil de l’Europe, Strasbourg. https://www.echr.coe.int/LibraryDocs/DG2/HRFILES/DG2-EN-HRFILES-17(2000).pdf. Accessed 18 January 2021.
Gürses S, Van Hoboken J (2017) Privacy after the Agile Turn. In: Polonetsky J, Tene O, Selinger E (eds) Cambridge Handbook of Consumer Privacy. Cambridge University Press, Cambridge, pp 579–601.
Hagendorff T (2020) The Ethics of AI Ethics: An Evaluation of Guidelines. Minds and Machines 30: 99–120.
Harris D, O’Boyle M, Bates E, Buckley C (2014) Law of the European Convention on Human Rights. Oxford University Press, Oxford.
Harrison J (2011) Human rights measurement: Reflections on the current practice and future potential of human rights impact assessment. J Hum Rights Prac. 3(2): 162–187.
Harrison J, Stephenson M-A (2010) Human Rights Impact Assessment: Review of Practice and Guidance for Future Assessments. Scottish Human Rights Commission. http://fian-ch.org/content/uploads/HRIA-Review-of-Practice-and-Guidance-for-Future-Assessments.pdf. Accessed 29 November 2017.
Hartzog W, Selinger E (2013) Big Data in Small Hands. Stan. L. Rev. Online 66:81–88.
Hildebrandt M (2013) Slaves to Big Data. Or Are We? IDP: revista d’Internet, dret i política 17:27–44.
Hildebrandt M (2016) Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology. Edward Elgar Publishing, Cheltenham.
Hildebrandt M (2021) The Issue of Bias. The Framing Powers of Machine Learning. In: Pelillo M, Scantamburlo T (eds) Machines We Trust. Perspectives on Dependable AI. MIT Press, Cambridge, MA, pp 44–59.
Hoofnagle C (2003) Big Brother’s Little Helpers: How Choicepoint and Other Commercial Data Brokers Collect, Process, and Package Your Data for Law Enforcement. N.C.J. Int’l L. & Com. Reg. 29(4):595–637.
Hummel P, Braun M, Tretter M, Dabrock P (2021) Data Sovereignty: A Review. Big Data & Society 8, https://doi.org/10.1177/2053951720982012.
IEEE (2019) Ethically Aligned Design. A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems First Edition Overview. https://standards.ieee.org/content/dam/ieee-standards/standards/web/documents/other/ead1e.pdf. Accessed 21 February 2020.
Independent High-Level Expert Group on Artificial Intelligence set up by the European Commission (2020) The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for Self-Assessment. https://digital-strategy.ec.europa.eu/en/library/assessment-list-trustworthy-artificial-intelligence-altai-self-assessment. Accessed 17 July 2021.
Information Commissioner’s Office (2018) DPIA Template v0.4. https://ico.org.uk/media/for-organisations/documents/2553993/dpia-template.docx. Accessed 17 August 2021.
Jobin A, Ienca M, Vayena E (2019) The Global Landscape of AI Ethics Guidelines. Nature Machine Intelligence 1:389–399.
Kaminski ME, Malgieri G (2021) Algorithmic impact assessments under the GDPR: producing multi-layered explanations. International Data Privacy Law 11(2):125–144.
Kemp D, Vanclay F (2013) Human rights and impact assessment: clarifying the connections in practice. Impact Assessment and Project Appraisal 31(2):86–96.
Kenneally E, Bailey M, Maughan D (2010) A Framework for Understanding and Applying Ethical Principles in Network and Security Research. In: Sion R et al (eds) Financial Cryptography and Data Security. Springer, Berlin, pp 240–246.
Kirby M (1981) Transborder Data Flows and the ‘Basic Rules’ of Data Privacy. Stanford J. of Int. Law 16:27–66.
Kramer ADI, Guillory JE, Hancock JT (2014) Experimental Evidence of Massive-Scale Emotional Contagion Through Social Networks. Proc. Nat’l Acad. Sci. 111(24):8788–8790. http://www.pnas.org/content/111/24/8788.full.pdf. Accessed 12 March 2018.
Kuner C (2012) The European Commission’s Proposed Data Protection Regulation: A Copernican Revolution in European Data Protection Law. Privacy & Sec. L. Rep. 11:1–15.
Kuner C, Cate FH, Millard C, Svantesson DJB (2014) Systematic Government Access to Private-Sector Data Redux. Int’l Data Privacy L. 4(1):1–3.
Kuner C, Cate FH, Lynskey O, Millard C, Ni Loideain N, Svantesson DJB (2018) Expanding the Artificial Intelligence-Data Protection Debate. Int’l Data Privacy L. 8(4):289–292.
Lerman J (2013) Big Data and Its Exclusions. Stan. L. Rev. Online 66:55–63.
Lessig L (1999) Code and Other Laws of Cyberspace. Basic Books, New York.
Leve L (2007) “Secularism Is a Human Right!”: Double-Binds of Buddhism, Democracy, and Identity in Nepal. In: Goodale M, Merry SE (eds) The Practice of Human Rights: Tracking Law between the Global and the Local. Cambridge University Press, Cambridge, pp 78–114.
Levitt P, Merry S (2009) Vernacularization on the Ground: Local Uses of Global Women’s Rights in Peru, China, India and the United States. Global Networks 9:441–461.
Lynskey O (2015) The Foundations of EU Data Protection Law. Oxford University Press, Oxford.
MacNaughton G, Hunt P (2011) A Human Rights-Based Approach to Social Impact Assessment. In: Vanclay F, Esteves AM (eds) New Directions in Social Impact Assessment. Edward Elgar, Cheltenham, https://doi.org/10.4337/9781781001196.00034.
Mahieu R (2021) The Right of Access to Personal Data: A Genealogy. Technology and Regulation 62–75.
Mantelero A (2013) Competitive value of data protection: the impact of data protection regulation on online behaviour. Int’l Data Privacy L. 3(4):229–238.
Mantelero A (2014a) Defining a New Paradigm for Data Protection in the World of Big Data Analytics. 2014 ASE BIGDATA/SOCIALCOM/CYBERSECURITY Conference, Stanford University, May 27–31, 2014. Academy of Science and Engineering, Los Angeles.
Mantelero A (2014b) Social Control, Transparency, and Participation in the Big Data World. Journal of Internet Law 17(10):23–29.
Mantelero A (2014c) The Future of Consumer Data Protection in the E.U. Re-Thinking the “Notice and Consent” Paradigm in the New Era of Predictive Analytics. Computer Law & Sec. Rev. 30(6): 643–660.
Mantelero A (2016) Personal data for decisional purposes in the age of analytics: from an individual to a collective dimension of data protection. Computer Law & Sec. Rev. 32(2):238–255.
Mantelero A (2017) Regulating Big Data. The guidelines of the Council of Europe in the Context of the European Data Protection Framework. Computer Law & Sec. Rev. 33(5):584–602.
Mantelero A, Esposito MS (2021) An Evidence-Based Methodology for Human Rights Impact Assessment (HRIA) in the Development of AI Data-Intensive Systems. Computer Law & Sec. Rev. 41, https://doi.org/10.1016/j.clsr.2021.105561.
Mantelero A, Vaciago G (2013) The “Dark Side” of Big Data: Private and Public Interaction in Social Surveillance, How data collections by private entities affect governmental social control and how the EU reform on data protection responds. Comp. L. Rev. Int’l 6:161–169.
Marsden C, Meyer T, Brown I (2020) Platform Values and Democratic Elections: How Can the Law Regulate Digital Disinformation? Computer Law & Sec. Rev. 36, https://doi.org/10.1016/j.clsr.2019.105373.
Massarani TF, Drakos MT, Pajkowska J (2007) Extracting Corporate Responsibility: Towards a Human Rights Impact Assessment. Cornell International Law Journal 40(1):135–169.
Mayer-Schönberger V (1997) Generational Development of Data Protection in Europe. In: Agre PE, Rotenberg M (eds) Technology and Privacy: The New Landscape. The MIT Press, Cambridge, pp 219–241.
Mayer-Schönberger V, Cukier K (2013) Big Data. A Revolution That Will Transform How We Live, Work and Think. John Murray, London.
Mayer-Schönberger V, Ramge T (2022) Access Rules. Freeing Data from Big Tech for a Better Future. University of California Press, Oakland.
McKinsey Global Institute (2011) Big data: The next frontier for innovation, competition, and productivity. http://www.mckinsey.com. Accessed 16 April 2012.
Merry SE (2006) Human rights and gender violence: translating international law into local justice. University of Chicago Press, Chicago.
Michaels JD (2008) All the President’s Spies: Private-Public Intelligence Partnerships in the War on Terror. California Law Review 96(4):901–966.
Miller AR (1971) The Assault on Privacy - Computers, Data Banks, Dossiers. University of Michigan Press, Ann Arbor.
Mitnick EJ (2018) Rights, Groups, and Self-Invention: Group-Differentiated Rights in Liberal Theory. Routledge, London.
Nardell G QC (2010) Levelling Up: Data Privacy and the European Court of Human Rights. In: Gutwirth S, Poullet Y, De Hert P (eds) Data Protection in a Profiled World. Springer, Dordrecht, pp 43–52.
National Research Council (2008) Protecting Individual Privacy in the Struggle Against Terrorists: A Framework for Program Assessment. National Academies Press, Washington, D.C.
Negroponte N (1994) Being digital. A. Knopf, New York.
New South Wales Privacy Committee (1977) Guidelines for the operations of personal data systems. http://www.rogerclarke.com/DV/NSWPCGs.pdf. Accessed 13 April 2018.
O’Sullivan D (1998) The History of Human Rights across the Regions: Universalism vs Cultural Relativism. The International Journal of Human Rights 2:22–48.
OECD (1980) Annex to the Recommendation of the Council of 23rd September 1980: Guidelines on the Protection of Privacy and Transborder Flows of Personal Data. http://www.oecd.org/internet/ieconomy/oecdguidelinesontheprotectionofprivacyandtransborderflowsofpersonaldata.htm#preface. Accessed 27 February 2014.
OECD (2013) Exploring the Economics of Personal Data: A Survey of Methodologies for Measuring Monetary Value. https://www.oecd-ilibrary.org/science-and-technology/exploring-the-economics-of-personal-data_5k486qtxldmq-en. Accessed 17 August 2021.
OECD (2019) Recommendation of the Council on Artificial Intelligence. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449. Accessed 23 May 2019.
Packard V (1964) The Naked Society. David McKay, New York.
Palm E, Hansson SO (2006) The case for ethical technology assessment (eTA). Technological Forecasting & Social Change 73(5):543–558.
Pasquale F (2015) The Black Box Society. The Secret Algorithms That Control Money and Information. Harvard University Press, Cambridge, MA/London.
Pell SK (2012) Systematic government access to private-sector data in the United States. Int’l Data Privacy L. 2(4):245–254.
Petrison LA, Blattberg RC, Wang P (1997) Database Marketing. Past, Present, and Future. J. Direct Marketing 11(4):109–125.
Polonetsky J, Tene O, Jerome J (2015) Beyond the Common Rule: Ethical Structures for Data Research in Non-Academic Settings. Colorado Technology Law Journal 13:333–367.
Poort J, Zuiderveen Borgesius F (2021) Personalised Pricing: The Demise of the Fixed Price? In: Eisler J, Kohl U (eds) Data-Driven Personalisation in Markets, Politics and Law. Cambridge University Press, Cambridge.
Porcedda MG (2017) Use of the Charter of Fundamental Rights by National Data Protection Authorities and the EDPS. Centre for Judicial Cooperation, Robert Schuman Centre for Advanced Studies, European University Institute. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3157786. Accessed 24 April 2018.
Poullet Y (2006) EU data protection policy. The Directive 95/46/EC: Ten years after. Computer Law & Sec. Rev. 22(3):206–217.
Raab C, Wright D (2012) Surveillance: Extending the Limits of Privacy Impact Assessment. In: Wright D, De Hert P (eds) Privacy Impact Assessment. Springer Netherlands, Dordrecht, pp 363–383.
Reidenberg J (2014) The Data Surveillance State in the US and Europe. Wake Forest L. Rev. 49:583–608.
Richards NM (2013) The Dangers of surveillance. Harv. L. Rev. 126:1934–1965.
Richards NM, King JH (2013) Three Paradoxes of Big Data. Stan. L. Rev. Online 66:41–46.
Risse T, Ropp SC (1999) International Human Rights Norms and Domestic Change: Conclusions. In: Sikkink K, Ropp SC, Risse T (eds) The Power of Human Rights: International Norms and Domestic Change. Cambridge University Press, Cambridge, pp 234–278.
Robinson D, Yu H, Rieke A (2014) Civil Rights, Big Data, and Our Algorithmic Future. A September 2014 report on social justice and technology. http://bigdata.fairness.io/wp-content/uploads/2014/09/Civil_Rights_Big_Data_and_Our_Algorithmic-Future_2014-09-12.pdf. Accessed 10 March 2015.
Rodotà S (2004) Privacy, Freedom, and Dignity: Conclusive Remarks at the 26th International Conference on Privacy and Personal Data Protection. https://www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/1049293#eng. Accessed 16 December 2017.
Rodotà S (2009) Data Protection as a Fundamental Right. In: Gutwirth S, Poullet Y, de Hert P, de Terwangne C, Nouwt S (eds) Reinventing Data Protection? Springer Netherlands, Dordrecht, pp 77–82.
Rotenberg M (2001) Fair Information Practices and the Architecture of Privacy (What Larry Doesn’t Get). Stan. Tech. L. Rev. 1.
Rouvroy A, Poullet Y (2009) The Right to Informational Self-Determination and the Value of Self-Development: Reassessing the Importance of Privacy for Democracy. In: Gutwirth S, Poullet Y, de Hert P, de Terwangne C, Nouwt S (eds) Reinventing Data Protection? Springer Netherlands, Dordrecht, pp 45–76.
Rubinstein IS (2013) Big Data: The End of Privacy or a New Beginning? Int’l Data Privacy L. 3(2):74–87.
Rubinstein IS, Nojeim GT, Lee RD (2014) Systematic government access to personal data: a comparative analysis. Int’l Data Privacy L. 4(2):96–119.
Ruggie J (2007) Report of the Special Representative of the Secretary-General on the Issue of Human Rights and Transnational Corporations and Other Business Enterprises: Guiding Principles on Business and Human Rights: Implementing the United Nations “Protect, Respect and Remedy” Framework. United Nations, General Assembly, A/HRC/4/74. https://documents-dds-ny.un.org/doc/UNDOC/GEN/G07/106/14/PDF/G0710614.pdf?OpenElement. Accessed 9 October 2020.
Samuelson P (2000) Privacy as Intellectual Property? Stan. L. Rev. 52(5):1125–1173.
Sartor G (2017) Human Rights and Information Technologies. In: Brownsword R, Scotford E, Yeung K (eds) The Oxford Handbook of Law, Regulation, and Technology. Oxford University Press, Oxford, pp 424–450, https://doi.org/10.1093/oxfordhb/9780199680832.013.79.
SATORI project (2017) Ethics assessment for research and innovation — Part 2: Ethical impact assessment framework. http://satoriproject.eu/media/CWA-SATORI_part-2_WD4-20170510W.pdf. Accessed 24 April 2018.
Schechter S, Bravo-Lillo C (2014) Using Ethical-Response Surveys to Identify Sources of Disapproval and Concern with Facebook’s Emotional Contagion Experiment and Other Controversial Studies. http://research.microsoft.com/pubs/220718/CURRENT%20DRAFT%20-%20Ethical-Response%20Survey.pdf. Accessed 12 March 2018.
Schwartz PM (1999) Privacy and Democracy in Cyberspace. Vanderbilt Law Review 52:1609–1701.
Schwartz PM (2004) Property, Privacy and Personal Data. Harv. L. Rev. 117(7):2056–2128.
Schwartz PM (2011) Data Protection Law and the Ethical Use of Analytics, pp 19–21. https://www.huntonak.com/files/webupload/CIPL_Ethical_Undperinnings_of_Analytics_Paper.pdf. Accessed 27 February 2014.
Schwartz PM (2013) The E.U.-US Privacy Collision: A Turn to Institutions and Procedures. Harvard Law Review 126:1966–2009.
Science and Technology Options Assessment (2014) Potential and Impacts of Cloud Computing Services and Social Network Websites. https://www.europarl.europa.eu/stoa/en/document/IPOL-JOIN_ET(2014)513546. Accessed 27 February 2014.
Secretary’s Advisory Committee on Automated Personal Data Systems (1973) Records, Computers and the Rights of Citizens. http://epic.org/privacy/hew1973report/. Accessed 27 February 2014.
Selbst AD (2017) Disparate Impact in Big Data Policing. Georgia Law Review 52(1):109–195.
Selbst AD, boyd d, Friedler SA, Venkatasubramanian S, Vertesi J (2019) Fairness and Abstraction in Sociotechnical Systems. Proceedings of the Conference on Fairness, Accountability, and Transparency (ACM 2019). https://doi.org/10.1145/3287560.3287598. Accessed 4 January 2020.
Simitis S (1987) Reviewing privacy in an information society. U. Pa. L. Rev. 135(3):707–746.
Simitis S (1995) From the Market to the Polis: The EU Directive on the Protection of Personal Data. Iowa L. Rev. 80:445–469.
Skorupinski B, Ott K (2002) Technology assessment and ethics. Poiesis & Praxis 1(2):95–122.
Solove DJ (2001) Privacy and Power: Computer Databases and Metaphors for Information Privacy. Stan. L. Rev. 53(6):1393–1462.
Solove DJ (2008) Understanding Privacy. Harvard University Press, Cambridge, MA/London.
Solove DJ (2013) Introduction: Privacy Self-management and The Consent Dilemma. Harv. L. Rev. 126:1880–1903.
Sparrow B, Liu J, Wegner DM (2011) Google Effects on Memory: Cognitive Consequences of Having Information at Our Fingertips. Science 333:776–778.
Stahl BC, Wright D (2018) Proactive Engagement with Ethics and Privacy in AI and Big Data - Implementing responsible research and innovation in AI-related projects. https://www.dora.dmu.ac.uk/xmlui/handle/2086/15328. Accessed 26 April 2018.
Stilgoe J, Owen R, Macnaghten P (2013) Developing a Framework for Responsible Innovation. Research Policy 42(9):1568–1580.
Strömholm S (1967) Right of Privacy and Rights of the Personality. A Comparative Survey. Working Paper Prepared for the Nordic Conference on Privacy Organized by the International Commission of Jurists, Stockholm May 1967. https://www.icj.org/wp-content/uploads/1967/06/right-to-privacy-working-paper-publication-1967-eng.pdf. Accessed 4 May 2019.
Svensson J (2011) Social impact assessment in Finland, Norway and Sweden: a descriptive and comparative study. Thesis, KTH Royal Institute of Technology 2011. https://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-86850. Accessed 27 April 2021.
Swire P (2012) From real-time intercepts to stored records: why encryption drives the government to seek access to the cloud. Int’l Data Privacy L. 2(4):200–206.
Taylor L, Floridi L, van der Sloot B (eds) (2017) Group Privacy: New Challenges of Data Technologies. Springer International Publishing, Cham.
Taylor NC, Hobson Bryan C, Goodrich CG (1990) Social assessment: theory, process and techniques. Centre for Resource Management, Lincoln College, Lincoln.
Taylor L, Schroeder R (2015) Is Bigger Better? The Emergence of Big Data as a Tool for International Development Policy. GeoJournal 80:503–518.
Tene O, Polonetsky J (2012) Privacy in the Age of Big Data: A Time for Big Decisions. Stan. L. Rev. Online 64. https://www.stanfordlawreview.org/online/privacy-paradox-privacy-and-big-data/. Accessed 20 March 2019.
The Boston Consulting Group (2012) The value of our digital identity. http://www.libertyglobal.com/PDF/public-policy/The-Value-of-Our-Digital-Identity.pdf. Accessed 27 February 2014.
The Danish Institute for Human Rights (2020) Human rights impact assessment. Guidance and toolbox. https://www.humanrights.dk/sites/humanrights.dk/files/media/dokumenter/udgivelser/hria_toolbox_2020/eng/dihr_hria_guidance_and_toolbox_2020_eng.pdf. Accessed 25 April 2021.
The European Commission’s High-level Expert Group on Artificial Intelligence (2018) A Definition of Artificial Intelligence: Main Capabilities and Scientific Disciplines. https://ec.europa.eu/digital-single-market/en/news/definition-artificial-intelligence-main-capabilities-and-scientific-disciplines. Accessed 18 December 2018.
The White House (2012) Consumer Data Privacy in a Networked World: A Framework for Protecting Privacy and Promoting Innovation in the Global Digital Economy. https://obamawhitehouse.archives.gov/sites/default/files/privacy-final.pdf. Accessed 4 December 2017.
The White House (2015) Administration Discussion Draft: Consumer Privacy Bill of Rights Act 2015. https://obamawhitehouse.archives.gov/sites/default/files/omb/legislative/letters/cpbr-act-of-2015-discussion-draft.pdf. Accessed 25 June 2017.
The White House, Executive Office of the President (2014) Big Data: Seizing Opportunities, Preserving Values. https://obamawhitehouse.archives.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf. Accessed 26 December 2014.
Turow J, Hoofnagle CJ, Mulligan DK, Good N (2007) The Federal Trade Commission and Consumer Privacy in the Coming Decade. ISJLP 3:723–749. https://lawcat.berkeley.edu/record/1121306. Accessed 27 February 2014.
Tzanou M (2013) Data protection as a fundamental right next to privacy? ‘Reconstructing’ a not so new right. Int’l Data Privacy L. 3(2):88–99.
UNESCO (2021) Draft Text of the Recommendation on the Ethics of Artificial Intelligence, SHS/IGM-AIETHICS/2021/JUN/2. https://unesdoc.unesco.org/ark:/48223/pf0000377881. Accessed 2 July 2021.
United Nations - General Assembly (2021) Artificial Intelligence and Privacy, and Children’s Privacy. Report of the Special Rapporteur on the Right to Privacy, Joseph A. Cannataci, A/HRC/46/37. https://undocs.org/pdf?symbol=en/A/HRC/46/37. Accessed 11 August 2021.
United Nations Office of the High Commissioner for Human Rights (2006) Frequently asked questions on a human rights-based approach to development cooperation. United Nations, New York/Geneva.
Van Alsenoy B, Kosta E, Dumortier J (2014) Privacy notices versus informational self-determination: Minding the gap. Int. Rev. Law. Comp. & Tech. 28(2):185–203.
van der Sloot B (2015) Privacy as Personality Right: Why the ECtHR’s Focus on Ulterior Interests Might Prove Indispensable in the Age of “Big Data”. Utrecht Journal of International and European Law 31(80):25–50.
van Drooghenbroeck S (2001) La proportionnalité dans le droit de la Convention européenne des droits de l’homme: prendre l’idée simple au sérieux. Publications Fac St Louis, Brussels.
Vanclay F (2002) Conceptualising social impacts. Environ. Impact. Assess. 22(3):183–211.
Vanclay F (2006) Principles for Social Impact Assessment: A Critical Comparison between the International and US Documents. Environmental Impact Assessment Review 26(1):3–14.
Vanclay F, Esteves AM, Aucamp I, Franks DM (2015) Social Impact Assessment: Guidance for assessing and managing the social impacts of projects. International Association for Impact Assessment, Fargo ND. http://www.iaia.org/uploads/pdf/SIA_Guidance_Document_IAIA.pdf. Accessed 26 April 2018.
Vedder AH (1997) Privatization, Information Technology and Privacy: Reconsidering the Social Responsibilities of Private Organizations. In: Moore G (ed) Business Ethics: Principles and Practice. Business Education Publishers, Sunderland, pp 215–226.
Wachter S, Mittelstadt B, Russell C (2018) Counterfactual Explanations without Opening the Black Box: Automated Decisions and the GDPR. Harvard Journal of Law & Technology 31(2):841–887.
Wachter S, Mittelstadt B, Russell C (2021) Why Fairness Cannot Be Automated: Bridging the Gap between EU Non-Discrimination Law and AI. Computer Law & Sec. Rev. 41. https://doi.org/10.1016/j.clsr.2021.105567.
Walker S (2009) The Future of Human Rights Impact Assessments of Trade Agreements. Intersentia, Utrecht.
Westin AF (1970) Privacy and Freedom. Atheneum, New York.
Whitman JQ (2004) The Two Western Cultures of Privacy: Dignity versus Liberty. The Yale Law Journal 113:1151–1221.
World Bank and Nordic Trust Fund (2013) Human Rights Impact Assessments: A Review of the Literature, Differences with other forms of Assessments and Relevance for Development. Washington, World Bank and Nordic Trust Fund.
World Economic Forum (2013) Unlocking the Value of Personal Data: From Collection to Usage. http://www3.weforum.org/docs/WEF_IT_UnlockingValuePersonalData_CollectionUsage_Report_2013.pdf. Accessed 27 February 2014.
Wright D (2011) A framework for the ethical impact assessment of information technology. Ethics and Information Technology 13(3):199–226.
Wright D, De Hert P (eds) (2012) Privacy Impact Assessment. Springer, Dordrecht.
Wright D, Friedewald M (2013) Integrating privacy and ethical impact assessments. Science and Public Policy 40(6):755–766.
Wright D, Mordini E (2012) Privacy and Ethical Impact Assessment. In: Wright D, De Hert P (eds) Privacy Impact Assessment. Springer Netherlands, Dordrecht, pp 397–418.
Zarsky T (2016) The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making. Science, Technology, & Human Values 41(1):118–132.
Zuiderveen Borgesius F (2020) Strengthening Legal Protection against Discrimination by Algorithms and Artificial Intelligence. The International Journal of Human Rights 24(10): 1572–1593.
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2022 The Author(s)
Cite this chapter
Mantelero, A. (2022). Beyond Data. In: Beyond Data. Information Technology and Law Series, vol 36. T.M.C. Asser Press, The Hague. https://doi.org/10.1007/978-94-6265-531-7_1
DOI: https://doi.org/10.1007/978-94-6265-531-7_1
Publisher Name: T.M.C. Asser Press, The Hague
Print ISBN: 978-94-6265-530-0
Online ISBN: 978-94-6265-531-7