Profiling and Cybersecurity: A Perspective from Fundamental Rights’ Protection in the EU

Chapter in: Legal Developments on Cybersecurity and Related Fields

Part of the book series: Law, Governance and Technology Series (LGTS, volume 60)

Abstract

In this text we assess the extent to which personal data protection is bound up with the development of AI systems, and test whether the General Data Protection Regulation (GDPR) (Regulation 2016/679 of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC) enables individuals to defend themselves against certain AI applications—especially with regard to profiling and automated decisions. Since the GDPR regulates the processing of data of an identified or identifiable natural person, it applies both when AI systems are developed using personal data and when such systems are used to analyze data and produce inferences about individuals (Datatilsynet - The Norwegian Data Protection Authority, Artificial intelligence and privacy, Report: Oslo, 2018). Herein lies the problem of the opacity of the inferences or predictions resulting from data analysis by AI systems—inferences whose application to everyday situations determines how each of us, as a personal data subject, is perceived and evaluated by others. It is important to assess whether legal remedies exist to challenge operations that result in automated inferences that are not reasonably justified. We thus intend to clarify whether the GDPR adequately protects inferred data, in the light of the fundamental right to the protection of personal data provided in Article 8 of the Charter of Fundamental Rights of the European Union (CFREU), under penalty of violating our insusceptibility to instrumentalization and objectification—in other words, human dignity itself.

This contribution was supported by the Portuguese Foundation for Science and Technology (FCT) with the reference UID/05749/2020.

Notes

  1.

    Data Protection Working Party (WP29) (2017), Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (last revised and adopted on 6 February 2018). The WP29 guidelines are not binding, but they represent the shared opinion of the various data protection supervisory authorities in the EU and carry persuasive weight before the courts, including the CJEU. The WP29 (so called because it was established under the then Article 29 of Directive 95/46, which preceded the GDPR) ceased to function on 25 May 2018, when the GDPR became applicable. It was functionally succeeded by the current European Data Protection Board—whose main task is to ensure the effective and consistent application of the GDPR and other Union data protection legislation.

  2.

    Id., which recognizes that obtained or inferred data, such as a profile of the person used to infer a credit score, also constitute personal data.

  3.

    Incidentally, some scholars advocate a “right on how to be seen”. Wachter and Mittelstadt (2019).

  4.

    For the purposes of this text, the current designation of this European institution [see Article 13(1) of the Treaty on European Union (TEU)] is adopted, irrespective of the actual judicial mechanism and body competent within that Court.

  5.

    See Opinion of Advocate General Priit Pikamäe, submitted on 16 March 2023, ECLI:EU:C:2023:220.

  6.

    Vilaça and Silveira (2017).

  7.

    This is reflected in Recital 7 of the GDPR, which states that “[t]hose developments require a strong and more coherent data protection framework in the Union, backed by strong enforcement, given the importance of creating the trust that will allow the digital economy to develop across the internal market”, as well as in Recital 10 of the GDPR, according to which “[i]n order to ensure a consistent and high level of protection of natural persons and to remove the obstacles to flows of personal data within the Union, the level of protection of the rights and freedoms of natural persons with regard to the processing of such data should be equivalent in all Member States”, and also in Recital 13 of the GDPR: “[i]n order to ensure a consistent level of protection for natural persons throughout the Union and to prevent divergences hampering the free movement of personal data within the internal market, a Regulation is necessary to provide legal certainty and transparency for economic operators.”

  8.

    Warren and Brandeis (1890).

  9.

    Breyer judgment of 19 October 2016, C-582/14, ECLI:EU:C:2016:779, recital 15: “IP addresses are series of digits assigned to networked computers to facilitate their communication over the internet. When a website is accessed, the IP address of the computer seeking access is communicated to the server on which the website consulted is stored. That connection is necessary so that the data accessed may be transferred to the correct recipient”; La Quadrature du Net judgment of 6 October 2020, C-511/18, C-512/18 and C-520/18, ECLI:EU:C:2020:791, recital 153: “IP addresses may be used, among other things, to track an Internet user’s complete clickstream and, therefore, his or her entire online activity, that data enables a detailed profile of the user to be produced.”
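    To make the clickstream risk described in La Quadrature du Net concrete, here is a minimal Python sketch (purely illustrative; the log format, IP addresses, and paths are invented, not drawn from the case) of how ordinary web-server log lines keyed by IP address can be grouped into a per-user browsing history, the raw material of a detailed profile.

        from collections import defaultdict

        # Hypothetical access-log lines: "ip timestamp requested_path".
        log_lines = [
            "203.0.113.7 2023-01-05T10:01 /news/politics",
            "203.0.113.7 2023-01-05T10:03 /health/anxiety-help",
            "198.51.100.2 2023-01-05T10:04 /sports/results",
            "203.0.113.7 2023-01-05T10:09 /shop/baby-clothes",
        ]

        # Group requests by IP address: each bucket is a clickstream that,
        # combined with the subscriber data held by an access provider,
        # can identify a person and sketch their entire online activity.
        clickstreams = defaultdict(list)
        for line in log_lines:
            ip, timestamp, path = line.split()
            clickstreams[ip].append((timestamp, path))

        for ip, visits in clickstreams.items():
            print(ip, "->", [path for _, path in visits])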

  10.

    Castro (2013).

  11.

    Lindqvist judgment of 6 November 2003, C-101/01, ECLI:EU:C:2003:596.

  12.

    Ibid., recitals 35 and 36.

  13.

    Google Spain judgment of 13 May 2014, C-131/12, ECLI:EU:C:2014:317.

  14.

    Digital Rights judgment of 8 April 2014, C-293/12 and C-594/12, ECLI:EU:C:2014:238.

  15.

    Schrems judgment of 6 October 2015, C-362/14, ECLI:EU:C:2015:650.

  16.

    Wirtschaftsakademie judgment of 5 June 2018, C-210/16, ECLI:EU:C:2018:388, recital 33: “According to the documents before the Court, the data processing at issue in the main proceedings is essentially carried out by Facebook placing cookies on the computer or other device of persons visiting the fan page, whose purpose is to store information on the browsers, those cookies remaining active for 2 years if not deleted. It also appears that in practice Facebook receives, registers and processes the information stored in the cookies in particular when a person visits the Facebook services, services provided by other members of the Facebook family of companies, and services provided by other companies that use the Facebook services.” At issue in this case was whether the concept of ‘controller’ encompasses the administrator of a fan page hosted on the Facebook social network in the event of a breach of the rules on personal data protection—which was confirmed by the CJEU. However, the existence of joint liability does not necessarily translate into equivalent liability of the different operators concerned for a processing of personal data. On the contrary, they may be involved at different stages of the processing and to different degrees, so that in order to assess the level of responsibility of each one, all relevant circumstances of the case must be considered. Wirtschaftsakademie judgment, cit., recital 43.

  17.

    Fashion ID judgment of 29 July 2019, C-40/17, ECLI:EU:C:2019:629. This case concerned the question, again in relation to responsibility for processing, of who must provide information to, and obtain consent from, the data subject when the administrator of a website embeds a Facebook “Like” social plugin, which transmits the personal data of visitors to that website even though the administrator has no influence on the processing of the transmitted data. The CJEU held that consent must be obtained by the administrator only in respect of the personal data processing operation or set of operations whose purposes and means are actually determined by the administrator. Fashion ID judgment, cit., recital 106.

  18.

    Planet49 judgment of 1 October 2019, C-673/17, ECLI:EU:C:2019:801, recitals 45 and 67.

  19.

    A cookie is a means of collecting information generated by a website and stored by the user’s browser; websites use cookies for user identification purposes, allowing the website to “remember” certain actions, characteristics or preferences of the user over time. Cookies can also be used to collect information to tailor advertising and marketing to online behaviors. Several companies use software to track users’ behavior and build personal profiles that allow them to serve ads relevant to users from their previous searches. Opinion of Advocate General Szpunar, submitted on 21 March 2019, C-673/17, ECLI:EU:C:2019:246, recitals 36–39.
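    As a rough illustration of the mechanism described by the Advocate General (a sketch using only Python's standard library; the cookie name, value, and lifetime are invented for the example), a website identifies a returning browser by setting a persistent cookie that the browser then sends back with every subsequent request:

        from http.cookies import SimpleCookie

        # Server side: issue an identifying cookie with a long lifetime,
        # comparable to the two-year cookies at issue in Wirtschaftsakademie.
        cookie = SimpleCookie()
        cookie["visitor_id"] = "a1b2c3d4"                      # invented identifier
        cookie["visitor_id"]["max-age"] = 2 * 365 * 24 * 3600  # roughly two years
        cookie["visitor_id"]["path"] = "/"
        print(cookie.output())  # emits the Set-Cookie header sent to the browser

        # Browser side: the stored cookie is returned with each request,
        # letting the site "remember" the user and link their visits over time.
        incoming = SimpleCookie("visitor_id=a1b2c3d4")
        print(incoming["visitor_id"].value)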

  20.

    However, the referring court did not ask the most relevant question for a preliminary ruling, namely whether making consent to the use of cookies conditional on participation in a promotional game is compatible with the requirement of free consent under the GDPR. In other words, if access to the game is conditional on consent, are we facing a manifestation of free will? The CJEU itself draws attention to the absence of this question—Planet49 judgment, cit., recital 64. In this regard, it is worth noting the WP29’s position, according to which consent is only valid if the data subject is able to exercise a real choice, without risk of intimidation, coercion or negative consequences for choosing not to consent: “The consent mechanism should present the user with a real and meaningful choice regarding cookies on the entry page. The user should have an opportunity to freely choose between the option to accept some or all cookies or to decline all or some cookies and to retain the possibility to change the cookie settings in the future”—WP29 (2013), Working Document 02/2013 providing guidance on obtaining consent for cookies.

  21.

    Planet49 judgment, cit., recitals 69–71.

  22.

    The Advocate General suggested that the CJEU adhere to the understanding of WP29 that the consent requirement would apply to storage of and/or access to “information” of any kind. Opinion of Advocate General Szpunar, cit., recital 108 and footnote 60.

  23.

    On 10 February 2021 the Council of the EU agreed on a proposal for the new “e-Privacy” Regulation that intends to replace Directive 2002/58 and establish a coherent framework between the lex specialis for the electronic communications sector and the general data protection rules now embodied in the GDPR. For a critical reading of the proposal, Cabral (2021).

  24.

    The Advocate General was in no doubt that the information referred to in Article 5(3) of Directive 2002/58 constitutes personal data. Opinion of Advocate General Maciej Szpunar, cit., recital 105.

  25.

    Planet49 judgment, cit., recital 69.

  26.

    Guerra (2019).

  27.

    Planet49 judgment, cit., recitals 69 and 70.

  28.

    Gegevensbeschermingsautoriteit judgment of 15 June 2021, C-645/19, ECLI:EU:C:2021:483. Opinion of Advocate General Michal Bobek, submitted on 13 January 2021, ECLI:EU:C:2021:5, recitals 19–21.

  29.

    See Silveira (2023).

  30.

    See Request for a preliminary ruling of 1 October 2021, Case C-634/21, recital 23.

  31.

    See Opinion of Advocate General Priit Pikamäe, cit., recital 3.

  32.

    Turing (1950).

  33.

    Domingos (2017).

  34.

    Bearing in mind AI’s inherent characteristics, its concept is dynamic. The European Commission provided a definition in 2018, according to which “(artificial intelligence) refers to systems that display intelligent behavior by analyzing their environment and taking actions—with some degree of autonomy—to achieve specific goals.” European Commission (2018a). The concept was further developed in 2019 through the work of the High-Level Expert Group on Artificial Intelligence (HLEG AI), established by the European Commission in June 2018: “Artificial intelligence (AI) refers to systems designed by humans that, given a complex goal, act in the physical or digital world by perceiving their environment, interpreting the collected structured or unstructured data, reasoning on the knowledge derived from this data and deciding the best action(s) to take (according to pre-defined parameters) to achieve the given goal. AI systems can also be designed to learn to adapt their behaviour by analysing how the environment is affected by their previous actions.” HLEG AI (2019a).

  35.

    Automated learning takes many forms and is known by several names: pattern recognition, statistical modeling, data mining, predictive analytics, adaptive systems, etc. On the subject, Domingos (2017).

  36.

    On the digital metamorphosis, Beck (2017). The author explains that while the digital revolution still implies the clear distinction between online and offline, digital metamorphosis has to do with the essential entanglement of online and offline.

  37.

    FRA - European Union Agency for Fundamental Rights (2018).

  38.

    Domingos (2017).

  39.

    Buttarelli (2017).

  40.

    Neural networks are one of the most flexible and powerful approaches currently in use in automated learning. They reproduce in the computer the essentials of the information-processing mechanism used by the human brain. In a human brain, neurons are connected through synapses. In the neural networks used in automated learning, mathematical methods determine the values of these connections so as to maximize the accuracy of the output computed by the network—which can be used to process images, sounds, and videos. When they are used to process images, for example, each neuron in one of these networks eventually learns to recognize a certain feature of the image. Oliveira (2019).
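    A minimal numerical sketch of that idea (pure NumPy; the toy task, architecture, and learning rate are invented for illustration and are not taken from Oliveira’s text): a tiny feedforward network whose connection weights, the analogue of synapses, are adjusted by gradient descent so that the output it computes matches the training labels ever more accurately.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy task: learn XOR. Inputs X, target outputs y.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        # "Synapses": weights and biases, input -> hidden -> output.
        W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
        W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        for _ in range(5000):
            # Forward pass: each hidden unit learns to respond to a feature.
            h = sigmoid(X @ W1 + b1)
            out = sigmoid(h @ W2 + b2)
            # Backward pass: nudge connection strengths to reduce the error.
            grad_out = (out - y) * out * (1 - out)
            grad_h = (grad_out @ W2.T) * h * (1 - h)
            W2 -= 0.5 * h.T @ grad_out; b2 -= 0.5 * grad_out.sum(axis=0)
            W1 -= 0.5 * X.T @ grad_h;   b1 -= 0.5 * grad_h.sum(axis=0)

        print(np.round(out, 2))  # approaches [[0], [1], [1], [0]]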

  41.

    New digital technologies have helped unravel how the brain works, how it stores and triggers memories, how it makes decisions—and thus have helped unravel how Evolution sculpted who we are. This decoding of the brain has become a global priority (as was once the mapping of the human genome), so much so that the EU has launched the “Human Brain Project” to understand the inner workings of cognition and consciousness. How much longer will it take to map the human brain? The answer depends on the desired mapping accuracy—perhaps it will happen while still in the 2020s. It is already possible to record memories—and not long ago it was not even known where they were stored, let alone how to record them and send them over the Internet. Michio Kaku, theoretical physicist at the City College of New York, explains the procedure succinctly: in the center of the brain is the hippocampus, where short-term memories are stored. One can place two electrodes on either side of the hippocampus, measure the impulses traveling back and forth, and record them. Months later, it is possible to take this record and put it back into the brain of an animal, which will remember what happened to it. The idea is to apply such technology to Alzheimer’s patients, so that in the future they will have a memory chip, activated simply by pushing a button, so that memories flood the hippocampus. Kaku (2018). Since it is becoming possible to measure and sequence brain activity, and to analyze someone’s thoughts in real time, how can we protect human thoughts and emotions from brain manipulation? On the issue of “neuro-rights”, Debasa (2021).

  42.

    On the subject, Oliveira (2019). The author discusses the hypothesis of emulating the operation of a human brain, i.e., the complete simulation of a specific system (a brain) by another system (a computer), with a view to discovering treatments for many, if not all, of the degenerative diseases that affect it. Such a hypothesis would not be without problems, however, since the program doing the emulation could be duplicated and copied an arbitrary number of times, sent to a distant computer and executed there, confer virtual immortality on the emulated person, and so on. In any event, the insights of António Damásio are pertinent in this context, according to which the content of the human mind depends on the organic substrate that sustains it—that is, the brain is not independent from the living organism of which it is a part. The narratives we create and the emotions we experience are not at all independent from their organic substrate. Damásio (2020).

  43.

    Domingos (2017).

  44.

    The legal protection offered by anti-discrimination legislation is challenged when AI systems, not humans, discriminate. Humans discriminate on the basis of negative attitudes (prejudices), sometimes unintentional ones (stereotypes), which can signal to victims that discrimination has occurred. Compared to traditional forms of discrimination, automated discrimination is more abstract, subtle, intangible, and non-intuitive, and therefore harder to detect. The increasing use of learning algorithms undermines the effectiveness of traditional legal procedures and remedies, which were designed for discrimination predominantly based on intuition, and for which the impact produced in the legal sphere of the person discriminated against is what matters. Scholars have therefore argued for a common standard of statistical evidence, aimed at detecting automated discrimination and at supporting legal practitioners, data controllers, and data subjects. Wachter et al. (2020).
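    By way of illustration of what such a statistical standard could measure (a deliberately simplified sketch on invented data; Wachter et al. in fact propose conditional demographic disparity, whereas this example computes only the unconditional selection-rate gap):

        import numpy as np

        # Invented decisions of an automated screening model: 1 = approved.
        group = np.array(["m", "m", "m", "m", "f", "f", "f", "f"])
        approved = np.array([1, 1, 1, 0, 1, 0, 0, 0])

        # Selection rate per protected group.
        rates = {str(g): float(approved[group == g].mean()) for g in np.unique(group)}
        print(rates)  # e.g. f: 0.25, m: 0.75

        # Disparate-impact ratio: values well below 1 flag a disparity
        # that the controller would then have to justify.
        print(rates["f"] / rates["m"])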

  45.

    Council of Europe – Committee of Experts on Internet Intermediaries (MSI-NET) (2018).

  46.

    European Commission (2020a).

  47.

    European Parliament (2017). In this Resolution, the European Parliament called on the European Commission to submit, on the basis of Article 114 TFEU, a legislative proposal on legal issues relating to the development and use of robotics and artificial intelligence foreseeable over the next 10–15 years. The European Parliament identified the need for efficient and up-to-date rules that correspond to technological development and to newly emerging innovations used in the market.

  48.

    European Commission (2018b).

  49.

    Analide and Rebelo (2019).

  50.

    European Commission (2020b).

  51.

    Datatilsynet - The Norwegian Data Protection Authority (2018).

  52.

    Harari (2017).

  53.

    Explaining how “surveillance capitalism” exploits sensitivities revealed by behavioral data—that is, how it is possible to know what a certain individual in a certain time and place thinks, feels and does—Shoshana Zuboff explains that Google’s user profile, for example, does not depend on the user voluntarily giving up information, but is inferred from analyzing their search patterns, documents and websites visited, advertisements previously selected, purchases made after viewing, and a host of other signs of online behavior. This means that the information can be provided by the user, by an authorized third party, or can derive from the user’s actions—which hinders curbing this market through the exercise of rights by reluctant users. The first stage of the expropriation of the behavioral surplus begins with the unilateral incursion into our defenseless space: our laptop, our cell phone, our web page, the street we live in, an e-mail to a friend, the online search for a birthday present, sharing pictures of our children, our interests and tastes, our feelings, our face. In the second stage the purpose is habituation—individuals get used to the incursion with a mixture of agreement, helplessness, and resignation; here the sense of wonder and outrage dissipates because everything seems to be inevitable. While parliamentary inquiries and court cases proceed at the slow pace of the democratic rule of law, the surveillance capitalists continue to develop their sometimes contested practices at great speed. In a third stage of the cycle, when any of the Internet giants are occasionally forced to change their practices, their executives and technicians produce superficial but tactically effective adaptations, satisfying the immediate demands of the authorities and public opinion. Then surveillance capitalism redirects the contested supply operations with new rhetoric, methods, and concepts—and everything stays just the same. Zuboff (2020).

  54.

    European Commission (2020b), White Paper on Artificial Intelligence – a European approach to excellence and trust, cit.

  55.

    HLEG AI (2019b). Neuroscientific studies are unveiling a path to AI through the technologies of “soft robotics”, that is, a new generation of machines that feel. According to António Damásio, the universe of affection was crucial for the emergence of creativity in human beings—it is the foundation for the intelligence that conscious minds have gradually developed. In this sense, if machines function according to “homeostatic feelings”, then they will be able to process and react to the conditions around them, improving the quality and efficiency of intelligent reaction. Such machines with feelings would eventually develop functional elements related to consciousness—since feelings are part of the path to consciousness—becoming a kind of hybrid of natural and artificial creature that could serve as an effective assistant to humans. Damásio (2020).

  56.

    Ethics is an academic discipline that constitutes a subdomain of philosophy. Broadly speaking, it deals with questions such as “What is a good action?”, “What is the value of a human life?”, “What is justice?” or “What is a good life?” In academic ethics, there are several areas of inquiry, among which applied ethics concerns what we are obliged (or allowed) to do in a specific (sometimes unprecedented or historically new) situation. Applied ethics deals with real-life situations, where decisions have to be made under time pressure and often with limited rationality. AI ethics is generally seen as an example of applied ethics, and focuses on the issues raised by the design, development, deployment, and use of AI. European Commission (2019).

  57.

    As Shoshana Zuboff clarifies, one of the first challenges to understanding and regulating AI systems has to do precisely with the confusion between what the author understands as “surveillance capitalism” and the digital technologies it uses. Surveillance capitalism would not be the technology itself—but rather a dynamic that permeates the technology and controls its use. In other words, surveillance capitalism would be a type of marketplace unthinkable outside the digital medium, but it would not represent the digital. This distinction is important to measure the extent to which some perplexities associated with the digital market are not challengeable simply through the protection of personal data. Zuboff (2020).

  58.

    Mendoza and Bygrave (2017). The authors, researchers at the University of Oslo, raise many questions as to whether Article 22 GDPR (the right not to be subject to solely automated decisions) will have a relevant practical impact on profiling.

  59.

    Wachter and Mittelstadt (2019).

  60.

    The authors define inferences as “[i]nformation relating to an identified or identifiable natural person created through deduction or reasoning rather than mere observation or collection from the data subject (…) which are created or used by data controllers or third parties, are privacy-invasive or harmful to reputation—or have a high likelihood of being so in the future—or have low verifiability in the sense of being predictive or opinion-based while being used for important decisions”. Wachter and Mittelstadt (2019), A right to reasonable inferences: re-thinking data protection law in the age of big data and AI, cit.

  61.

    In 2017 the European Commission estimated that, by 2020, around 6 billion household devices (televisions, refrigerators, washing machines, etc.) would be connected to the Internet in the EU. European Commission (2017). In February of that year, the European Parliament had already warned that these devices constitute a significant threat to privacy, due to the placement of connected devices in traditionally protected and intimate spaces, as well as their ability to extract and transmit information concerning sensitive personal data. European Parliament (2017), European Parliament resolution with recommendations to the Commission on Civil Law Rules on Robotics, cit.

  62.

    On the process of transforming bodily experiences into behavioral data, Zuboff (2020).

  63.

    Analide and Rebelo (2019).

  64.

    Shoshana Zuboff argues that “surveillance capitalism” claims human experience as free raw material and turns it into behavioral data. The processing of personal data aimed at improving the supply of goods and services would generate a behavioral surplus—which would be transformed into predictive products capable of anticipating our current and future actions. Such predictive products or inferences would be sold in a new kind of market for behavioral predictions—which the author calls behavioral futures markets—that would only incidentally deal with the less lucrative targeting of advertisements. The surveillance capitalists would be getting rich from this new business model, as numerous companies would be eager to bet on our future behavior. In this market, personal data would not be traded—because it is not the raw materials that are sold, but rather the predictions resulting from the behavioral surplus. The competitive dynamics of these new markets would have encouraged the search for progressively more predictive sources of behavioral surplus: our voices/personalities/emotions. Until the surveillance capitalists discovered that they obtained more predictive data if they intervened in the behaviors themselves, trying to adjust, persuade, tune, and steer them toward more profitable outcomes. Any agent interested in buying probabilistic information about our behavior (and/or influencing future behavior) would be able to pay to intervene in markets where individuals’ behavior is guessed at and sold. Protected by the opacity of automated processes and the sense of inevitability that emanates from them, the project of surveillance capitalism would entrench our dependence on it for an effective life, creating a psychic torpor that habituates us to the reality of being tracked, analyzed, and modified. The elementary rights associated with individual autonomy, essential to the very viability of a democratic society, would thereby be nullified. Zuboff (2020).

  65.

    WP29 (2017), Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, cit.

  66.

    Apps available online collect huge amounts of data daily from mobile devices—and process it to provide more up-to-date and appealing services to the user. However, such data may be further processed, usually to generate revenue, in ways unknown or undesired by the data subject. On the subject, WP29 (2013), Opinion 02/2013 on apps on smart devices, 00461/13/EN, adopted on 27 February 2013.

  67.

    It is important to note that categories such as sex and age, for example, traditionally protected by EU law [Articles 8 and 10 TFEU, and Article 21(1) CFREU], are not included in this list. Nevertheless, even though they are not considered sensitive data, they matter for the protection of inferred data: if an AI system is trained using only or mostly male data, for example, this can lead to skewed/biased results regarding women, as the sketch below illustrates.
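    A small synthetic sketch of that skew (all numbers are invented; this is not a real screening model): a cut-off calibrated only on the majority group's score distribution systematically misclassifies a minority group whose scores are distributed differently.

        import numpy as np

        rng = np.random.default_rng(1)

        # Invented screening task: qualified candidates score above a bar,
        # but the score distributions differ between the two groups.
        men = rng.normal(70, 5, 500)        # training data: men only
        men_qualified = men > 68
        women = rng.normal(60, 5, 500)      # same qualification logic,
        women_qualified = women > 58        # expressed on a shifted scale

        # "Model": a single cut-off fitted to the male training data alone.
        threshold = np.percentile(men, 100 * (1 - men_qualified.mean()))

        def accuracy(scores, qualified):
            return float(((scores > threshold) == qualified).mean())

        print("accuracy on men:  ", round(accuracy(men, men_qualified), 2))
        print("accuracy on women:", round(accuracy(women, women_qualified), 2))
        # The male-fitted threshold (about 68) rejects nearly every
        # qualified woman scoring between 58 and 68.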

  68.

    According to Article 9(2) GDPR, the prohibition of processing of special categories of personal data does not apply: (a) if the data subject has given explicit consent; (b) if processing is necessary for the purposes of carrying out the obligations and exercising specific rights of the controller or of the data subject in the field of employment and social security and social protection law; (c) if processing is necessary to protect the vital interests of the data subject or of another natural person where the data subject is physically or legally incapable of giving consent; (d) if processing is carried out in the course of its legitimate activities with appropriate safeguards by a foundation, association or any other not-for-profit body with a political, philosophical, religious or trade union aim; (e) if processing relates to personal data which are manifestly made public by the data subject; (f) if processing is necessary for the establishment, exercise or defence of legal claims or whenever courts are acting in their judicial capacity; (g) if processing is necessary for reasons of substantial public interest; (h) processing is necessary for the purposes of preventive or occupational medicine (…) or pursuant to contract with a health professional; (i) if processing is necessary for reasons of public interest in the area of public health; (j) if processing is necessary for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes (…).

  69.

    Examples abound of sensitive data inferred from opinions/preferences voluntarily provided on social networks. The WP29 refers to a U.S. study published in 2013 [Kosinski et al. (2013), Private traits and attributes are predictable from digital records of human behavior], according to which it is possible, from Facebook “likes”, to infer the sexual orientation of a male user in 88% of cases, the ethnic origin of a user in 95% of cases, and whether a user is Christian or Muslim in 82% of cases.
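    A toy version of the technique behind such findings (entirely invented data; the actual study applied dimensionality reduction and regression to millions of like records): fit a classifier that predicts an undisclosed attribute from which pages a user has liked. The example assumes scikit-learn is installed.

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Rows = users, columns = pages liked (1 = liked). All invented.
        likes = np.array([
            [1, 0, 1, 0],
            [1, 0, 0, 1],
            [1, 1, 1, 0],
            [0, 1, 0, 1],
            [0, 1, 0, 0],
            [0, 0, 1, 1],
        ])
        # Sensitive attribute known for the training users only.
        attribute = np.array([1, 1, 1, 0, 0, 0])

        model = LogisticRegression().fit(likes, attribute)

        # A new user who never disclosed the attribute: it is inferred
        # from behavior alone, which is precisely the profiling problem.
        new_user = np.array([[1, 0, 1, 1]])
        print(model.predict(new_user), model.predict_proba(new_user))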

  70.

    According to Recital 78 GDPR, “[i]n order to be able to demonstrate compliance with this Regulation, the controller should adopt internal policies and implement measures which meet in particular the principles of data protection by design and data protection by default.” In this sense, “the controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, (…)” [Article 25(1) GDPR] to ensure “that, by default, only personal data which are necessary for each specific purpose of the processing are processed. That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. In particular, such measures shall ensure that by default personal data are not made accessible without the individual’s intervention to an indefinite number of natural persons” [Article 25(2) GDPR]. Faced with the vicissitudes of explainability under the GDPR (“meaningful information about the logic involved”), some authors rely on the principles of data protection by design and data protection by default as the feasible route to more explainable and responsible algorithms. Edwards and Veale (2017). Underlying this is the idea that the development of AI requires all those involved in the development and marketing of such systems to take legal responsibility for the quality of the technology they produce at all stages of the process.
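    One way to picture data protection by default in code (a minimal sketch; the class and field names are invented, not an established API): optional disclosures start disabled and are enabled only by the individual's own intervention, so that by default only the data necessary for the service is processed.

        from dataclasses import dataclass, field

        @dataclass
        class PrivacySettings:
            # Data protection by default [Article 25(2) GDPR]: every
            # optional disclosure starts switched off.
            share_location: bool = False
            personalised_ads: bool = False
            public_profile: bool = False

        @dataclass
        class UserProfile:
            # Data minimisation: only data necessary for the service itself.
            username: str
            settings: PrivacySettings = field(default_factory=PrivacySettings)

        user = UserProfile("alice")
        print(user.settings)  # everything off until the user opts in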

  71.

    As the Articles of the GDPR provide for a right to information and not exactly a right to explanation (the latter appearing only in Recital 71: “to obtain an explanation of the decision reached”), scholars are divided as to whether a right to explanation exists under the GDPR. On the subject, Wachter et al. (2017); Cabral (2020). Cabral concludes that several Articles of the GDPR underpin the right to explanation: not only the right to information (Articles 13 and 14), but also the right of access (Article 15 GDPR) and the right not to be subject to solely automated decisions (Article 22 GDPR).

  72.

    Recital 63 of the GDPR specifies that, where possible, the controller should provide access to a secure electronic system that enables the data subject to directly access their personal data. Recital 63 further ensures a certain level of protection for controllers who may be concerned about revealing commercial secrets or intellectual property, which may be relevant to profiling. According to this recital, the right of access “(…) should not adversely affect the rights or freedoms of others, including trade secrets or intellectual property and in particular the copyright protecting the software.” However, the WP29 warns that controllers cannot invoke the protection of their trade secrets as a pretext to deny access or refuse to provide information to the data subject. WP29 (2017), Guidelines on automated individual decision-making and profiling for the purposes of Regulation 2016/679, cit. Additionally, scholars have argued that the high levels of precision of data mining and machine learning techniques have little to do with the software, because it is the raw data, not the software, that drives the operation. Analide and Rebelo (2019). In any event, and bearing in mind the intellectual property dimension included in Article 17 CFREU, any resulting concrete conflict must be dealt with using the method of Article 52(1) CFREU—which establishes a general clause restricting the exercise of fundamental rights—considering the essential content of the fundamental right in question as well as the principle of proportionality. On the subject, FRA - European Union Agency for Fundamental Rights (2020).

  73.

    As for the first exception to the prohibition in Article 22(1) GDPR, for contractual (including pre-contractual) reasons, the controller must be able to demonstrate the necessity of the processing [Article 22(2)(a)] by assessing whether another equally effective but less intrusive method of achieving the same purpose could be adopted—if one exists, solely automated processing is not necessary.

  74.

    Recital 71 GDPR contextualizes such a possibility [provided for in Article 22(2)(b) GDPR] and explains that the automated operation: “(...) should be allowed where expressly authorised by Union or Member State law to which the controller is subject, including for fraud and tax-evasion monitoring and prevention purposes conducted in accordance with the regulations, standards and recommendations of Union institutions or national oversight bodies and to ensure the security and reliability of a service provided by the controller.”

  75.

    Strictly speaking, Article 22(3) GDPR provides that in the cases referred to in Article 22(2)(a) and (c), i.e., contractual necessity and explicit consent, the controller shall implement suitable measures to safeguard the data subject’s rights, namely the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision. The exclusion of Article 22(2)(b) GDPR—i.e., the exception according to which the prohibition does not apply if the decision is authorized by Union or Member State law to which the controller is subject—could indicate a willingness of the legislator not to require appropriate measures where legal authorization is involved. This would be a solution of very doubtful compatibility with the fundamental right to the protection of personal data in Article 8 CFREU. Moreover, Recital 71 GDPR suggests a general application: “[i]n any case, such processing should be subject to suitable safeguards”. Since that recital does not contradict the spirit of Article 22 GDPR—i.e., to protect the data subject from the dangers of decision-making based purely on automation—there seems to be no compelling reason to exempt processing based on legal authorization from the appropriate measures of Article 22(3) (obtaining human intervention, expressing one’s view, and challenging the decision). The aim of the legislature is (can only be) to prevent decision-making from taking place without individual assessment and evaluation by a human being. Thus, it is worth recalling the case law of the CJEU regarding recitals: (i) recitals are not mandatory and cannot be invoked to derogate from the actual provisions of the legal act in question (Nilsson judgment of 19 November 1998, C-162/97, ECLI:EU:C:1998:554, recital 54); (ii) recitals have legal force and clarify the scope and purpose of legislative provisions (Lindqvist judgment of 6 November 2003, C-101/01, ECLI:EU:C:2003:596, recital 79, and C judgment of 27 November 2007, C-435/06, ECLI:EU:C:2007:714, recitals 31, 48 and 52); (iii) recitals are powerful interpretative tools that clarify the meaning of binding provisions of an EU legal act (Casa Fleischhandels-GmbH judgment of 13 July 1989, 215/88, ECLI:EU:C:1989:331, recital 31).

  76.

    Regarding meaningful information about the logic involved, the Spanish Data Protection Agency clarifies that such information should enable the data subject to understand or know something precisely; that is, it should be interpreted as information which, once provided to the data subject, makes them aware of the type of processing being carried out with their data and gives them certainty and confidence about its results. Agencia Española Protección Datos (2020).

  77.

    A more exhaustive list is provided by the Spanish Data Protection Agency, which explains that fulfilling this obligation by providing a technical reference to the implementation of the algorithm can be opaque and confusing, and can even lead to information fatigue. Information should instead be provided in a way that enables the data subject to understand the behavior of the processing. Although it will depend on the type of AI component used, information that could be relevant to the data subject includes: the detail of the data used for decision-making, beyond its category, and in particular the time frame of use of the data; the relative importance of each data item in the decision; the quality of the training data and the type of patterns used; the profiling performed and its implications; accuracy or error values according to the metric appropriate for measuring the soundness of the inference; the existence or not of qualified human supervision; references to audits, especially concerning possible deviations in the results of the inferences, and to any certifications performed on the AI system; in the case of adaptive or evolutionary systems, the last audit performed; and, where the AI system contains information on identifiable third parties, the prohibition on processing such information without a legitimate basis and the consequences of doing so. Id.
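    Purely as an illustration of how a controller might organize the items on the AEPD's list (an invented schema, not an official or prescribed format), the disclosure could be assembled as a structured record rather than a raw technical reference to the algorithm:

        # Hypothetical "explanation record" mirroring the AEPD's list of
        # information that may be relevant to the data subject.
        explanation_record = {
            "data_used": ["income, last 12 months", "payment history, 24 months"],
            "relative_importance": {"payment history": 0.7, "income": 0.3},
            "training_data_quality": "audited; known gap: few young applicants",
            "profiling_performed": "credit-risk score used to grant or deny loans",
            "accuracy": {"metric": "AUC", "value": 0.81},
            "human_supervision": True,
            "audits_and_certifications": ["bias audit, last performed 2023-06"],
            "third_party_data": "none; processing it without a legal basis is prohibited",
        }

        for key, value in explanation_record.items():
            print(f"{key}: {value}")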

  78.

    According to Recital 60 GDPR, “[t]he controller should provide the data subject with any further information necessary to ensure fair and transparent processing taking into account the specific circumstances and context in which the personal data are processed. Furthermore, the data subject should be informed of the existence of profiling and the consequences of such profiling.”

  79.

    The guidelines conceived within the Council of Europe on the subject are particularly apt: “Data subjects should be entitled to know the reasoning underlying the processing of data, including the consequences of such a reasoning, which led to any resulting conclusions, in particular in cases involving the use of algorithms for automated-decision making including profiling. For instance in the case of credit scoring, they should be entitled to know the logic underpinning the processing of their data and resulting in a “yes” or “no” decision, and not simply information on the decision itself. Having an understanding of these elements contributes to the effective exercise of other essential safeguards such as the right to object and the right to complain to a competent authority.” Council of Europe (2018), Explanatory Report to the Protocol amending the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (the Explanatory Report of Convention 108).

  80.

    On the subject, HLEG AI (2019b).

  81.

    Analide and Rebelo (2019). The authors explain that the processing operations within AI make use of analytical models whose approximate predictions externalize fuzzy arguments that accept different degrees of truth (almost, maybe, somewhat) and not just the distinction between truth and falsehood.

  82.

    The right of access is given effect by Article 15 GDPR, under which data subjects may obtain access to their data and be informed about: “(a) the purposes of the processing; (b) the categories of personal data concerned; (c) the recipients or categories of recipient to whom the personal data have been or will be disclosed (…); (d) (…) the envisaged period for which the personal data will be stored (…); (e) the existence of the right to request from the controller rectification or erasure of personal data (…); (f) the right to lodge a complaint with a supervisory authority; (g) where the personal data are not collected from the data subject, any available information as to their source; (h) (…) meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.”

  83.

    Wachter and Mittelstadt (2019).

  84.

    On the subject, FRA - European Union Agency for Fundamental Rights (2020), in which the Agency identifies the main challenges posed to the fundamental right to effective judicial protection by operations based on AI systems. The report concludes that only the explainability of such operations guarantees the injured party the possibility of appearing before a court, alleging the facts that effectively embody the violation of a right.

References

  • Agencia Española Protección Datos (2020) Adecuación al RGPD de tratamientos que incorporan Inteligencia Artificial. Una introducción, Report

  • Analide C, Rebelo D (2019) Inteligência artificial na era data-driven, a lógica fuzzy das aproximações soft computing e a proibição de sujeição a decisões tomadas exclusivamente com base na exploração e prospeção de dados pessoais, Fórum de proteção de dados, No. 6. Comissão Nacional de Proteção de Dados, Lisbon

  • Beck U (2017) A metamorfose do mundo. Edições 70, Lisbon

  • Buttarelli G (2017) Inteligência artificial, robótica, privacidade e proteção de dados, Fórum de proteção de dados, No. 4. Comissão Nacional de Proteção de Dados, Lisbon

  • Cabral T (2020) AI and the right to explanation: three legal bases under the GDPR. Computers, privacy & data protection. Hart Publishing, Oxford

  • Cabral T (2021) The Council’s position regarding the proposal for the e-Privacy Regulation: out of the frying pan and into the fire?, The official blog of UNIO (https://officialblogofunio.com/)

  • Castro C (2013) Artigo 8.° - Proteção de dados pessoais. In: Silveira A, Canotilho M (eds) Carta dos Direitos Fundamentais da União Europeia Comentada. Almedina, Coimbra

  • Council of Europe – Committee of Experts on Internet Intermediaries (MSI-NET) (2018) Algorithms and human rights – Study on the human rights dimensions of automated data processing techniques (in particular algorithms) and possible regulatory implications, DGI(2017)12, Strasbourg

  • Damásio A (2020) Sentir e saber – a caminho da consciência. Temas & Debates, Círculo de Leitores, Lisbon

  • Datatilsynet - The Norwegian Data Protection Authority (2018) Artificial intelligence and privacy, Report, Oslo

  • Debasa F (2021) Neuro-rights, The official blog of UNIO (https://officialblogofunio.com/)

  • Domingos P (2017) A revolução do algoritmo mestre. Como a aprendizagem automática está a mudar o mundo. Letras & Diálogos, Lisbon

  • Edwards L, Veale M (2017) Slave to the algorithm? Why a ‘right to an explanation’ is probably not the remedy you are looking for. Duke Law Technol Rev 16:18

  • European Commission (2017) Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of Regions on the mid-term review on the implementation of the Digital Single Market Strategy—a connected Digital Single Market for all, [COM(2017) 228 final], Brussels, 10 May 2017

  • European Commission (2018a) Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of Regions—Artificial Intelligence for Europe, Brussels, 25 April 2018, COM(2018) 237 final

  • European Commission (2018b) Communication from the Commission – Artificial intelligence for Europe, COM/2018/237 final, Brussels, 25 April 2018

  • European Commission (2019) Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions – Building Trust in Human Centric Artificial Intelligence, COM(2019)168, Brussels, 8 April 2019

  • European Commission (2020a) Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions – Shaping Europe’s digital future, COM(2020) 67 final, Brussels

  • European Commission (2020b) White Paper on Artificial Intelligence – a European approach to excellence and trust, COM(2020) 65 final, Brussels, 19 February 2020

  • European Parliament (2017) European Parliament resolution with recommendations to the Commission on Civil Law Rules on Robotics [2015/2103(INL)], 16 February 2017

  • FRA - European Union Agency for Fundamental Rights (2018) #BigData: discrimination in data-supported decision making, FRA Focus

  • FRA - European Union Agency for Fundamental Rights (2020) Getting the future right – Artificial intelligence and fundamental rights, Report

  • Guerra C (2019) Consequências do caso ‘Planet49’ na atividade de marketing, Fórum de proteção de dados, No. 6. Comissão Nacional de Proteção de Dados, Lisbon

  • Harari Y (2017) Homo Deus. História breve do amanhã. Elsinore, Amadora

  • HLEG AI (2019a) A definition of AI: main capabilities and scientific disciplines. European Commission, Brussels

  • HLEG AI (2019b) Ethics guidelines for trustworthy AI. European Commission, Brussels

  • Kaku M (2018) O futuro da humanidade. Editorial Bizâncio, Lisbon

  • Mendoza I, Bygrave L (2017) The right not to be subject to automated decisions based on profiling, SSRN [Preprint]

  • Oliveira A (2019) Inteligência artificial. Fundação Francisco Manuel dos Santos, Lisbon

  • Silveira A (2023) Finally, the ECJ is interpreting Article 22 GDPR (on individual decisions based solely on automated processing, including profiling). The official blog of UNIO (https://officialblogofunio.com/)

  • Turing A (1950) Computing machinery and intelligence. Mind 59(236):433–460

  • Vilaça JC, Silveira A (2017) The European federalisation process and the dynamics of fundamental rights. In: Kochenov D (ed) EU citizenship and federalism – the role of rights. Cambridge University Press

  • Wachter S, Mittelstadt B (2019) A right to reasonable inferences: re-thinking data protection law in the age of big data and AI. Colum Bus Law Rev 2:494

  • Wachter S et al (2017) Why a right to explanation of automated decision-making does not exist in the general data protection regulation. Int Data Priv Law 7(2):76

  • Wachter S et al (2020) Why fairness cannot be automated: bridging the gap between EU non-discrimination law and AI, SSRN [Preprint]

  • Warren S, Brandeis L (1890) The right to privacy. Harv Law Rev 4(5):193

  • Zuboff S (2020) A era do capitalismo da vigilância – a disputa por um futuro humano na nova fronteira do poder. Relógio d’Água, Lisbon

Author information

Correspondence to Alessandra Silveira.

Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this chapter

Cite this chapter

Silveira, A. (2024). Profiling and Cybersecurity: A Perspective from Fundamental Rights’ Protection in the EU. In: Carneiro Pacheco de Andrade, F.A., Fernandes Freitas, P.M., de Sousa Covelo de Abreu, J.R. (eds) Legal Developments on Cybersecurity and Related Fields. Law, Governance and Technology Series, vol 60. Springer, Cham. https://doi.org/10.1007/978-3-031-41820-4_15

  • DOI: https://doi.org/10.1007/978-3-031-41820-4_15

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-41819-8

  • Online ISBN: 978-3-031-41820-4

  • eBook Packages: Law and Criminology (R0)
