Freedom of thought was an idea at the heart of the European Enlightenment. And Europe has been at the vanguard of regulation for data protection and privacy in the digital sphere, with the General Data Protection Regulation translating the rights to protection of personal data and to private life contained in the EU Charter of Fundamental Rights into a practical regulatory tool with global reach. But focusing on the data misses the heart of the problem we face with the onward march of what Shoshana Zuboff describes as “surveillance capitalism.”Footnote 15 Zuboff has described the way the technology industry has evolved into a market in “human futures,” with our data providing insights into how we think and what we do. But as she explains, “[u]ltimately, it has become clear that the most predictive data comes from intervening in our lives to tune and herd our behaviour towards the most profitable outcomes. Data scientists describe this as a shift from monitoring to actuation. The idea is not only to know our behaviour but also to shape it in ways that can turn predictions into guarantees. It is no longer enough to automate information flows about us; the goal now is to automate us.”Footnote 16 As technology and data protection regulation develop side by side, technologists and machines will learn to step away from the data, finding new ways to achieve the same results, ultimately moving towards personalisation without the personal data. If the European Union wants to future-proof human rights in the technological age, it will need to place the rights to freedom of thought and opinion, not just data protection, at the heart of its digital strategy.
In 2020, the European Commission held two wide-ranging consultations on its digital future, the consultation on the White Paper on Artificial IntelligenceFootnote 17 and the consultation on the Digital Services Act package.Footnote 18 While neither of these consultations explicitly addressed the rights to freedom of thought or opinion in any detail, they do offer a chance for the European Union to reflect on new ideas and perspectives for the next era of European regulation in the digital space.
The White Paper focuses on building an “ecosystem of excellence” and an “ecosystem of trust” based on European values and the rule of law. The risks to fundamental rights are enumerated, but the list of potential rights implications misses the essential problem of risk to the right to freedom of thought.
The General Data Protection Regulation (GDPR)Footnote 19 mentions the right to freedom of thought in its preamble.Footnote 20 And Article 6 on the lawfulness of processing provides that processing necessary for the purposes of the legitimate interests pursued by the controller or by a third party will not be lawful “where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.”Footnote 21 So in principle, processing will not be lawful under the GDPR where it interferes with the rights to freedom of thought, mental integrity, or freedom of opinion. But the GDPR does not explicitly prohibit the types of practice that could interfere with the right to freedom of thought, notably personality profiling or AI that draws emotional inferences from personal data.
Data and AI are increasingly used to get inside people’s minds, whether to make inferences about how people are thinking and feeling or to influence their thoughts and emotions so as to produce particular behaviours, whether those people are consumers, citizens, suspects, patients, or pupils. There is an urgent need for the EU to reflect on the gaps in the current framework to ensure that its ecosystem of trust provides robust protections for our inner lives. EU law must be interpreted in light of the Charter, but there is also a need for more explicit protections in the context of AI. So far, the strength and importance of these rights, as they apply both to AI and to the use of data in general, have not been reflected in specific legal frameworks. Creating such frameworks requires careful consideration of the ways in which the right to freedom of thought could be interfered with by AI and other digital services, so as to work out the most effective ways to protect it.
AI and the use of big data pose a risk to all three elements of the right to freedom of thought. Data is increasingly used to infer emotional or mental states. It is also used to nudge or influence individuals’ mental states in order to change behaviours. And inferences are drawn about inner states to predict and penalise people for potential future behaviours. These practices are intrinsic to the current consumer-data-driven business model of AI described in detail by Shoshana Zuboff.Footnote 22 But they are not yet explicitly prohibited in EU law, although, arguably, a reading of EU law in the light of the Charter would prohibit any practice that interferes with the right to freedom of thought.
The EU White Paper on AIFootnote 23 includes an example of discrimination in the use of AI to predict recidivism, but the issue here is not only the discriminatory impact. If the AI is being used to draw inferences about an individual’s inner state to predict their future behaviour (as opposed to analysing their past behaviour), with consequences for the way they are treated in the criminal justice system, this is likely to breach the right to freedom of thought and the prohibition on punishing a person for their thoughts. The discriminatory impact exacerbates the breach of the right, but it is not the only issue.
In 2017, researchers at Stanford University published research claiming they could tell a person’s sexual orientation from their photograph.Footnote 24 The research sparked a serious backlash questioning its ethics and validity, but the researchers said they had done the work to open up debate on the dangerous capabilities of AI. Regardless of discussions around the validity and ethics of the research, it highlights the need for robust regulatory frameworks to prevent the use of AI to infer things about our inner lives from our data, including our biometric data. And it raises a whole series of concerns about the potential for facial recognition technology to be harnessed as a tool to interfere with our rights to keep our thoughts and feelings private, not just to identify who we are and monitor where we go. This example is extreme, but AI designed to profile individual emotions or personalities for other purposes is equally problematic from the perspective of freedom of thought. While there may be circumstances in which the rights to privacy or data protection may be limited, this is not the case with the absolute right to freedom of thought. If a practice breaches the right, it should never be allowed for any reason.
Behavioural micro-targeting is another practice that may interfere with the right to freedom of thought. It uses personality profiling to tailor messages and to manipulate thoughts, emotions, and behaviours. In the political context, the Cambridge Analytica scandal showed that this use of AI is also a threat to democracy. Yet despite the public outrage, political behavioural micro-targeting is not prohibited in most EU countries, and many “neuropolitics” consultancy firms offering services that claim to access the emotional responses of the public and voters are based in the EU.Footnote 25 In March 2019, the Spanish “Defensor del Pueblo,” the ombudsman body tasked with protecting the rights guaranteed in the Spanish Constitution, challenged the constitutionality of Spanish laws that gave broad permission to political parties to collect and use voter data in their campaigning activities. He argued that, far from being a tool for democracy, these broad powers, stripped of the usual limitations of data protection law, amounted to a breach of several constitutional rights, including the right to protection of personal data, the right to private life, the right to “ideological freedom” (the Spanish constitutional equivalent of freedom of thought and freedom of opinion), and the right to political participation. Explaining his reasons for bringing the case, he commented that, “the possibility, or even the certainty, as shown by the recent case of Cambridge Analytica, that big data techniques can be used to modulate, or even manipulate, political opinions demonstrates the need for normative guarantees and legal restraints to be appropriate, precise and effective in relation to the collection and processing of personal data related to political opinions by political parties in the context of their electoral activities.”Footnote 26
The Constitutional Court agreed with him and, in a record two months, reached a unanimous decisionFootnote 27 declaring the new provisions unconstitutional and putting an end to the wide-ranging powers over personal data in Spanish electoral campaigns. While the court recognised that personal data on political opinions may be collected and processed in the context of electoral activities under EU law, it stressed that this is permissible only where it is in the public interest and the law provides sufficient safeguards. In its judgment, it noted that the right to protection of personal data is important both as a standalone right and as a right that guarantees the effective enjoyment of other constitutional rights, such as the right to ideological freedom. This judicial recognition of the interplay between data protection and the rights to freedom of thought and opinion highlights the legal importance of data protection not only for privacy, but also for our absolute right to freedom from manipulation of our thoughts and opinions.
Freedom of thought is fundamental to the very idea of democracy. But the techniques used in the political sphere are, to a large extent, mirrored in commercial marketing practices. Protecting our freedom of thought requires a review of our relationship with marketing and advertising more broadly, not only in the political sphere. There is a need for careful consideration of the permissible boundaries for AI processing of personal data to protect the right to freedom of thought both in the commercial and in the political sphere.
The White Paper notes that:
“Europe’s current and future sustainable economic growth and societal wellbeing increasingly draws on value created by data. AI is one of the most important applications of the data economy. Today most data are related to consumers and are stored and processed on central cloud-based infrastructure. By contrast a large share of tomorrow’s far more abundant data will come from industry, business and the public sector, and will be stored on a variety of systems, notably on computing devices working at the edge of the network. This opens up new opportunities for Europe, which has a strong position in digitised industry and business-to-business applications, but a relatively weak position in consumer platforms.”Footnote 28 But data from industry, business, and the public sector, when used to make decisions that affect real people, are far from unproblematic.
Countries around the world, including in the EU, are increasingly reliant on AI to make data-based decisions on things like access to social welfare. But the human rights implications of these developments have not been seriously considered before deployment. As the UN Special Rapporteur on Extreme Poverty, Philip Alston, put it, we are at risk of “stumbling, zombie-like into a digital welfare dystopia.”Footnote 29 Social welfare is expensive, and welfare fraud is a problem that governments everywhere struggle with. To address it, the Netherlands introduced an automated system known as System Risk Indication (SyRI for short) to identify the people most likely to commit benefit fraud based on the full range of data points available to the State. But following a legal challenge by a group of non-governmental organisations (NGOs), the District Court of The Hague ruled that the legislation governing SyRI violated the higher law of international human rights.Footnote 30 The inability of people to know whether or not they had been profiled or identified as risky, together with the total lack of transparency about the algorithm and data used, meant that the legislation was unlawful as a breach of the right to private life. The case was argued primarily on the basis of the right to private life, a limited right, so there was a balancing act to be done between the needs of public order and the rights of individuals. The rights to a personal identity and to personal development are elements of the right to private life, but where they refer to things that go on inside our heads, our thoughts, personality, and inclinations, they may equally be considered aspects of the rights to freedom of thought or opinion.
In considering whether there has been a breach of the right to freedom of thought or opinion, the question is whether the facts show an interference with the right. If there is an interference, there is no balancing exercise to be done: the practice is a breach of human rights. Courts are more comfortable with arguments around privacy and data protection because these are now more familiar, but as technology and AI evolve, the legal arguments around the right to freedom of thought need to develop to meet new challenges that privacy and data protection may not be sufficient to address.Footnote 31 Legal controls and regulations that focus on freedom of thought may be clearer and more radical than those that focus on data protection and privacy, because they will create a clearer picture of what may never be permissible in human rights terms.