1 Introduction

Since the World Wide Web first appeared in 1989, regulation of the internet and the digital space more broadly has struggled to keep pace with technological progress, in part because of the narrative that regulation might stifle innovation. But the societal and individual human rights impacts of technology in our lives are increasing daily, and there is an urgent need for States to deliver on their positive obligations to protect human rights through effective law and regulation. Privacy and data protection have been at the frontline of attempts by national and international bodies to regulate the way in which technology interacts with us at a personal level. And the right to freedom of expression, along with the prohibition on hate speech, has formed another battleground between libertarian ideals that claim deregulation is the answer to human freedom on the internet, and those who insist that protecting human rights, both online and offline, entails protecting us from the actions of others, whether individuals or companies. But as technology and artificial intelligence advance, new human rights questions are coming into play, opening novel challenges as well as new perspectives and opportunities for the future regulation of the digital world. This article focuses on the emerging impact of technology on the rights to freedom of thought and opinion in the forum internum.

One of the reasons why data protection is so important is that the granular nature of the data available on us not only reveals who we are and where we live; it reveals how we think. And the debates around hate speech and content regulation on the internet are not fundamentally about the content: they are about the way that information, and the way it is served to us, directly affects our opinions and emotions. Privacy, data protection and expression are the gateways to our minds and, in the digital age, they have so far served as the gatekeepers for our rights to freedom of thought and opinion. But as we consider our future relationship with tech regulation and innovation, there is an opportunity to look beyond these rights and consider the most effective ways to protect the rights at the heart of the digital revolution: our rights to freedom of thought and opinion in the privacy of our own minds.

2 International law framework

Privacy and freedom of expression are limited rights in both European[1] and international law,[2] and discussions around protecting them inevitably involve questions of balance and proportionality. We can limit privacy to protect health, for example, but in doing so we must ensure that any limitation is in accordance with the law, justified, necessary in a democratic society, proportionate and non-discriminatory. Similarly, expression is not protected when it is used to destroy the rights of others, and limitations on the right to freedom of expression may be allowed if they can pass all of those tests. International human rights law makes a distinction between the internal aspect of the right to freedom of thought – the right to think or believe what you like in the inner sanctum of your mind – and the manifestation of the right. What stays inside your head is protected absolutely, while your expression of thoughts, opinions, religion or belief can be limited by law when necessary in a democratic society for a legitimate aim. The rights to freedom of thought, conscience and belief and to freedom of opinion are thus protected absolutely, at least insofar as they stay inside our heads. This means that there can never be a justification for violating those rights: there are no legitimate aims that justify interfering with them, and an interference with the right to freedom of thought cannot be cured by asserting its legality in domestic law. We are familiar with the arguments around regulating speech and religious freedom, but little attention has been paid to the scope of the rights to freedom of thought and opinion in the “forum internum”. This inviolable freedom has been described as “the foundation of democratic society”[3] and “the basis and origin of all other rights.”[4] It matters because, without the freedom to think for ourselves and to form and hold opinions, we lose our ability to innovate, to decide on our futures through democratic political engagement and, ultimately, what Apple CEO Tim Cook has described as “the freedom to be human.”[5] Neither the courts nor policymakers have yet really grappled with the relevance and meaning of the rights to freedom of thought and opinion or the steps we can take to protect them. But now, more than ever, we need to address ourselves to this fundamental challenge. If we lose our freedom to think for ourselves, we may never get it back.

So what do the right to freedom of thought and the related right to freedom of opinion in the “forum internum” entail? The right to freedom of thought, along with the closely related right to freedom of opinion, has been recognised as a fundamental right since the Age of Enlightenment and is enshrined in various international human rights instruments including the UDHR, the ICCPR, the ECHR and the EU Charter. It protects all aspects of our inner lives, whether profound or trivial, including emotional states and political opinions.[6] The UN Human Rights Committee confirmed, in its General Comment 22 on Article 18 of the ICCPR, that the right to freedom of thought is absolute and non-derogable insofar as it protects the “forum internum”.[7] The absolute nature of the protection in law means that, unlike privacy, data protection, freedom of expression or many other rights, there can never be a justification for interfering with the right to freedom of thought. The question is not, therefore, what is the basis for legitimate interference with the right, but rather, what is the effective scope of the right to absolute protection in the “forum internum”?

The rights to freedom of thought and freedom of opinion[8] include three elements:

  • the right to keep your thoughts and opinions private;

  • the right not to have your thoughts and opinions manipulated; and

  • the right not to be penalised for your thoughts and opinions.[9]

The EU Charter of Fundamental Rights has several provisions that protect the internal aspects of freedom of thought in terms of both dignity and freedom. These include the right to mental integrity,[10] the right to freedom of thought, conscience and religion[11] and the freedom to hold opinions.[12] This means that the rights are a part of EU Treaty law and that EU law and policy should reflect that. So far, however, little has been done to develop fully operational legislative and regulatory frameworks to ensure that enjoyment of the right to freedom of thought is real and effective in a modern context.

In practical terms, State signatories to international human rights conventions are bound to respect these rights, and in many cases they are reflected in domestic human rights laws and constitutions. But States also have positive obligations to protect all those in their jurisdiction from interference with the right. So laws must be in place not only to prevent state actions that could interfere with our rights to freedom in the “forum internum”, but also to prohibit others from doing so. Effective protection for the rights to freedom of thought and opinion in the 21st century must be sufficient to address threats from public and private sector activities as well as security threats from bad actors.

Protecting the right to freedom of thought is not about directing thoughts in the direction we think is best; it is about recognising and prohibiting the kinds of practices and techniques that threaten to undermine the right, no matter who is using them. In 2019, the Council of Europe’s Committee of Ministers issued a Declaration on the Manipulative Capabilities of Algorithmic Processes which recognised that “[f]ine grained, sub-conscious and personalised levels of algorithmic persuasion may have significant effects on the cognitive autonomy of individuals and their right to form opinions and take independent decisions.”[13] In the same declaration, the Committee of Ministers recognised that this could “lead to the corrosion of the very foundations of the Council of Europe” – human rights and democracy. The UN Secretary General’s High-Level Panel on Digital Cooperation noted that “[w]e are delegating more and more decisions to intelligent systems, from how to get to work to what to eat for dinner. This can improve our lives, by freeing up time for activities we find more important. But it is also forcing us to rethink our understandings of human dignity and agency, as algorithms are increasingly sophisticated at manipulating our choices – for example, to keep our attention glued to a screen.”[14] UN Special Rapporteurs, including those on freedom of opinion and expression, human rights and counter-terrorism, and extreme poverty, have also flagged the risks of AI and other technological developments for the “forum internum” in recent years. It is now time for these warnings to prompt real and effective regulation and legislation that identifies problem areas and takes action to protect our right to think for ourselves while we still can.

3 Opportunities for regulation in Europe

Freedom of thought was an idea at the heart of the European Enlightenment. And Europe has been at the vanguard of regulation for data protection and privacy in the digital sphere, with the General Data Protection Regulation translating the rights to protection of personal data and to private life contained in the EU Charter of Fundamental Rights into a practical regulatory tool with global reach. But focusing on the data misses the heart of the problem we face with the onward march of what Shoshana Zuboff describes as “surveillance capitalism”.[15] Zuboff has described the way the technology industry has evolved into a market in “human futures”, with our data providing insights into how we think and what we do. But as she explains, “[u]ltimately, it has become clear that the most predictive data comes from intervening in our lives to tune and herd our behaviour towards the most profitable outcomes. Data scientists describe this as a shift from monitoring to actuation. The idea is not only to know our behaviour but also to shape it in ways that can turn predictions into guarantees. It is no longer enough to automate information flows about us; the goal now is to automate us.”[16] As technology and data protection regulation develop side by side, technologists and machines will learn to step away from the data, finding new ways to achieve the same results and ultimately moving towards personalisation without the personal data. If the European Union wants to future-proof human rights in the technological age, it will need to place the rights to freedom of thought and opinion, not just data, at the heart of its digital strategy.

In 2020, the European Commission held two wide-ranging consultations on its digital future: the consultation on the White Paper on Artificial Intelligence[17] and the consultation on the Digital Services Act package.[18] While neither of these consultations explicitly addressed the rights to freedom of thought or opinion in any detail, they do offer a chance for the European Union to reflect on new ideas and perspectives for the next era of European regulation in the digital space.

The White Paper focuses on building an “ecosystem of excellence” and an “ecosystem of trust” based on European values and the rule of law. The risks to fundamental rights are enumerated, but the list of potential rights implications misses the essential problem of risk to the right to freedom of thought.

The General Data Protection Regulation (GDPR)[19] mentions the right to freedom of thought in its preamble.[20] And Article 6, on the lawfulness of processing, provides that processing necessary for the purposes of the legitimate interests pursued by the controller or by a third party will not be lawful “where such interests are overridden by the interests or fundamental rights and freedoms of the data subject which require protection of personal data, in particular where the data subject is a child.”[21] So in principle, processing will not be lawful under the GDPR where it interferes with the rights to freedom of thought, mental integrity or freedom of opinion. But the GDPR is not explicit about prohibiting the types of practice that could interfere with the right to freedom of thought, notably personality profiling or AI that draws emotional inferences from personal data.

Data and AI are increasingly used to get inside people’s minds: to make inferences about how people are thinking and feeling, or to influence their thoughts and emotions to produce particular behaviours, whether as consumers, citizens, suspects, patients or pupils. There is an urgent need for the EU to reflect on the gaps in the current framework to ensure that its ecosystem of trust provides robust protections for our inner lives. EU law must be interpreted in light of the Charter, but there is also a need for more explicit protections in the context of AI. So far, the strength and importance of these rights, as they apply to both AI and the use of data in general, have not been reflected in specific legal frameworks. Creating such frameworks requires careful consideration of the ways in which the right to freedom of thought could be interfered with by AI and other digital services, so as to work out the most effective ways to protect it.

AI and the use of big data pose a risk to all three elements of the right to freedom of thought. Data is increasingly used to infer emotional or mental states. It is also used to nudge or influence individuals’ mental states to change behaviours. And inferences are drawn about inner states to predict, and penalise people for, potential future behaviours. These practices are intrinsic to the current consumer-data-driven business model of AI described in detail by Shoshana Zuboff.[22] But they are not yet explicitly prohibited in EU law, although, arguably, a reading of EU law in the light of the Charter would prohibit any practices that interfere with the right to freedom of thought.
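To make the mechanism concrete, the toy sketch below shows how a standard classifier can infer a disposition that a person never chose to express from ordinary behavioural traces, and how targeting follows directly from the inference. It is a minimal illustration under stated assumptions: the data is synthetic, the signal names in the comments are hypothetical, and it reconstructs no deployed system.

```python
# Minimal, illustrative sketch only: synthetic data, hypothetical signals,
# not a reconstruction of any deployed profiling system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Behavioural traces: each row a user, each column a signal such as pages
# "liked", late-night activity or scroll speed (all names hypothetical).
n_users, n_signals = 1000, 20
X = rng.normal(size=(n_users, n_signals))

# A latent inner state correlated with a handful of those signals, e.g. an
# emotional disposition or a political leaning the user never stated.
latent = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=n_users)
y = (latent > 0).astype(int)

# The "forum internum" inference: fit once, then read inner states from data.
model = LogisticRegression().fit(X, y)
print(f"inference accuracy: {model.score(X, y):.2f}")

# Micro-targeting follows directly: serve each user the message variant
# their inferred inner state suggests they are most open to.
variants = {0: "reassuring appeal", 1: "fear-based appeal"}
for user in range(3):
    inferred = int(model.predict(X[user : user + 1])[0])
    print(f"user {user}: serve the {variants[inferred]}")
```

Nothing in this pipeline requires the person ever to have expressed the opinion being inferred, which is why rules framed around volunteered or observed personal data only partially reach it.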

The EU White Paper on AI[23] includes an example of discrimination in the use of AI to predict recidivism, but the issue here is not only the discriminatory impact. If AI is being used to draw inferences about an individual’s inner state to predict their future behaviour (as opposed to analysing their past behaviour), with consequences for the way they are treated in the criminal justice system, this is likely to breach the right to freedom of thought and the prohibition on penalising a person for their thoughts. The discriminatory impact exacerbates the breach of the right, but it is not the only issue.

In 2017, researchers at Stanford University published research claiming that a person’s sexual orientation could be detected from their photograph.[24] The research sparked a serious backlash questioning its ethics and validity, but the researchers said that they had done the work to open up debate on the dangerous capabilities of AI. Regardless of the discussions around the validity and ethics of the research, it highlights the need for robust regulatory frameworks to prevent the use of AI to infer things about our inner lives from our data, including our biometric data. And it raises a whole series of concerns about the potential for facial recognition technology to be harnessed as a tool to interfere with our rights to keep our thoughts and feelings private, not just to identify who we are and monitor where we go. This example is extreme, but AI designed to profile individual emotions or personalities for other reasons is equally problematic from the perspective of freedom of thought. While there may be circumstances in which the rights to privacy or data protection may be limited, this is not the case with the absolute right to freedom of thought: if a practice breaches the right, it can never be allowed, for any reason.

Behavioural micro-targeting is another practice that may interfere with the right to freedom of thought. It uses personality profiling to tailor messages and so manipulate thoughts, emotions and behaviours. In the political context, the Cambridge Analytica scandal showed that this use of AI is also a threat to democracy. But despite the public outrage, political behavioural micro-targeting is not prohibited in most EU countries, and many “neuropolitics” consultancies based in the EU offer services that claim to access the emotional responses of the public and of voters.[25] In March 2019, the Spanish “Defensor del Pueblo”, the ombudsman body tasked with protecting the rights guaranteed in the Spanish Constitution, challenged the constitutionality of Spanish laws that gave political parties broad permission to collect and use voter data in their campaigning activities. He argued that, far from being a tool for democracy, these broad powers, freed from the usual limitations of data protection law, amounted to a breach of several constitutional rights, including the right to protection of personal data, the right to private life, the right to “ideological freedom” (the Spanish constitutional equivalent of freedom of thought and freedom of opinion) and the right to political participation. Explaining his reasons for bringing the case, he commented that “the possibility, or even the certainty, as shown by the recent case of Cambridge Analytica, that big data techniques can be used to modulate, or even manipulate, political opinions demonstrates the need for normative guarantees and legal restraints to be appropriate, precise and effective in relation to the collection and processing of personal data related to political opinions by political parties in the context of their electoral activities.”[26]

The Constitutional Court agreed with him, reaching a unanimous decision[27] in a record two months that declared the new provisions unconstitutional and put an end to the wide-ranging powers over personal data in Spanish electoral campaigns. While the court recognised that personal data on political opinions may be collected and processed in the context of electoral activities under EU law, it stressed that this is only permissible where it is in the public interest and the law provides sufficient safeguards. In its judgment, it noted that the right to protection of personal data is important both as a standalone right and as a right that guarantees the effective enjoyment of other constitutional rights such as the right to ideological freedom. This judicial recognition of the interplay between data protection and the rights to freedom of thought and opinion highlights the legal importance of data protection not only for privacy, but also for our absolute right to freedom from manipulation of our thoughts and opinions.

Freedom of thought is fundamental to the very idea of democracy. But the techniques used in the political sphere are, to a large extent, mirrored in commercial marketing practices. Protecting our freedom of thought requires a review of our relationship with marketing and advertising more broadly, not only in politics. There is a need for careful consideration of the permissible boundaries for AI processing of personal data to protect the right to freedom of thought in both the commercial and the political spheres.

The White Paper notes that:

“Europe’s current and future sustainable economic growth and societal wellbeing increasingly draws on value created by data. AI is one of the most important applications of the data economy. Today most data are related to consumers and are stored and processed on central cloud-based infrastructure. By contrast a large share of tomorrow’s far more abundant data will come from industry, business and the public sector, and will be stored on a variety of systems, notably on computing devices working at the edge of the network. This opens up new opportunities for Europe, which has a strong position in digitised industry and business-to-business applications, but a relatively weak position in consumer platforms.”[28] But data from industry, business and the public sector, when used to make decisions that affect real people, are far from unproblematic.

Countries around the world, including in the EU, are increasingly reliant on AI to make data-based decisions on matters such as access to social welfare, but the human rights implications of these developments have not been seriously considered before deployment. As the UN Special Rapporteur on Extreme Poverty, Philip Alston, put it, we are at risk of “stumbling, zombie-like into a digital welfare dystopia.”[29] Social welfare is expensive, and welfare fraud is a problem that governments everywhere struggle with. To address it, the Netherlands introduced an automated system known as System Risk Indication (SyRI for short) to identify the people most likely to commit benefit fraud based on the full range of data points available to the State. But following a legal challenge by a group of non-governmental organisations (NGOs), the District Court of The Hague ruled that the legislation governing SyRI violated the higher law of international human rights.[30] People’s inability to know whether they had been profiled or identified as risky, along with the absolute lack of transparency about the algorithm and data used, meant that the legislation was unlawful, in breach of the right to private life. The case was argued primarily on the basis of the right to private life, a limited right, so there was a balancing act to be done between the needs of public order and the rights of individuals. The right to a personal identity and the right to personal development are elements of the right to private life, but where they refer to things that go on inside our heads – our thoughts, personality and inclinations – they may equally be considered aspects of the rights to freedom of thought or opinion.
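The actual SyRI model and its data inputs were never disclosed, and that opacity was central to the litigation, so any concrete rendering is necessarily hypothetical. The sketch below illustrates only the general shape of such a system as the judgment describes it: disparate linked government data points folded into a single undisclosed score, with the weights and threshold invisible to the person being scored.

```python
# Hypothetical sketch of an opaque welfare-fraud risk indicator in the style
# described in the SyRI litigation. NOT the actual SyRI algorithm, whose
# data and weights were never disclosed; all fields and numbers are invented.
from dataclasses import dataclass


@dataclass
class CitizenRecord:
    # Linked government data points; categories are illustrative only.
    benefit_history_flags: int
    housing_mismatch: bool
    utility_usage_anomaly: bool
    debt_registrations: int


def risk_score(record: CitizenRecord) -> float:
    """Fold disparate data points into a single opaque 'risk' number.

    The citizen cannot see these weights, cannot learn whether they were
    profiled, and cannot contest the inference: the features the District
    Court of The Hague found incompatible with the right to private life.
    """
    return (0.3 * record.benefit_history_flags
            + 0.25 * record.housing_mismatch
            + 0.2 * record.utility_usage_anomaly
            + 0.1 * record.debt_registrations)


flagged = risk_score(CitizenRecord(2, True, False, 1)) > 0.5
print("flagged for investigation:", flagged)
```

Read as a prediction about a person’s inner propensity rather than a record of past conduct, such a score is exactly where the freedom-of-thought analysis begins to bite, beyond what the privacy-based reasoning in the judgment could reach.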

In considering whether there has been a breach of the right to freedom of thought or opinion, the question is whether the facts show an interference with the right. If there is an interference, there is no balancing exercise to be done: the practice is a breach of human rights. The courts are more comfortable with arguments around privacy and data protection because these are now more familiar, but as technology and AI evolve, the legal arguments around the right to freedom of thought need to develop to meet new challenges that privacy and data protection may not be sufficient to address.[31] Legal controls and regulations that focus on freedom of thought may be clearer and more radical than those that focus on data protection and privacy, because they will create a clearer picture of what may never be permissible in human rights terms.

4 The precautionary principle

There is increasing (though sometimes contradictory) evidence of the impact technology has not only on mental health but also on the actual shape and capacity of the human brain.[32] Product safety and liability are relevant to health outcomes and the long-term effects of technology, and may also be considered from the perspective of fundamental rights, including the right to freedom of thought and the right to mental integrity. Assessing and preventing risks to safety and the potential for human rights abuses should be a basic plank of regulation in scientific and technological development.

UNESCO, along with its advisory body, the World Commission on the Ethics of Scientific Knowledge and Technology, developed a working definition of the “precautionary principle”, a principle found in many international instruments relating to scientific development and the environment:

“When human activities may lead to morally unacceptable harm that is scientifically plausible but uncertain, actions shall be taken to avoid or diminish that harm.

Morally unacceptable harm refers to harm to humans or the environment that is

  • threatening to human life or health, or

  • serious and effectively irreversible, or

  • inequitable to present or future generations, or

  • imposed without adequate consideration of the human rights of those affected.

The judgement of plausibility should be grounded in scientific analysis. Analysis should be ongoing so that chosen actions are subject to review.

Uncertainty may apply to, but need not be limited to, causality or the bounds of the possible harm.

Actions are interventions that are undertaken before harm occurs that seek to avoid or diminish the harm. Actions should be chosen that are proportional to the seriousness of the potential harm, with consideration of their positive and negative consequences, and with an assessment of the moral implications of both action and inaction. The choice of action should be the result of a participatory process.”[33]

Technological developments that have the capacity to interfere with our freedom of thought fall clearly within the scope of “morally unacceptable harm”. The precautionary principle requires that action be taken before harm occurs where the risk of harm is real, even if it is not certain. It is a familiar part of EU law in the environmental context.[34] It is now time to ensure that the precautionary principle is rigorously applied to ongoing developments in the technology field as well. The European Union’s General Data Protection Regulation brought in new standards in an attempt to future-proof the right to data protection in light of fast-changing technological advances. Building on data-protection principles, the preamble recognises that:


“The processing of personal data should be designed to serve mankind. The right to the protection of personal data is not an absolute right; it must be considered in relation to its function in society and be balanced against other fundamental rights, in accordance with the principle of proportionality. This Regulation respects all fundamental rights and observes the freedoms and principles recognised in the Charter as enshrined in the Treaties, in particular the respect for private and family life, home and communications, the protection of personal data, freedom of thought, conscience and religion, freedom of expression and information, freedom to conduct a business, the right to an effective remedy and to a fair trial, and cultural, religious and linguistic diversity.”[35]

The substantive rights of the GDPR include the right not to be subject to automated individual decision-making, including profiling.[36] But data protection and privacy can be limited. Applying the precautionary principle to the potential impact of technology on the human mind, through the lens of the right to freedom of thought, could provide useful guidance: clear directions for innovation and technological development, and a map of the areas that will never be legitimate for development. The EU White Paper noted different Member State approaches to the future risks of AI, pointing to the call by the German Data Ethics Commission for a “five-level risk-based system of regulation that would go from no regulation for the most innocuous AI systems to a complete ban for the most dangerous ones.”[37] This could provide a template for regulation that protects the rights to freedom of thought and opinion and the right to mental integrity: the most dangerous AI systems would include those that interfere with the rights to freedom of thought or opinion, regardless of the sphere in which they operate. The European Union should not be afraid to close down directions for research and development that risk destroying the freedom of thought that is needed for future innovation. The precautionary principle needs to be at the heart of European regulation of the digital space.
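To illustrate how such a scheme might be operationalised, the sketch below encodes a five-tier, risk-based classification in the spirit of the German Data Ethics Commission’s proposal. The tier names, criteria and mapping rules are illustrative assumptions, not the Commission’s actual scheme; the one deliberate feature is that interference with the forum internum maps straight to the banned tier, with no balancing.

```python
# Illustrative five-tier risk scheme inspired by (not reproducing) the
# German Data Ethics Commission proposal; names and rules are assumptions.
from enum import IntEnum


class RiskTier(IntEnum):
    NO_REGULATION = 1         # innocuous systems
    TRANSPARENCY_DUTIES = 2
    EX_ANTE_APPROVAL = 3
    CONTINUOUS_OVERSIGHT = 4
    COMPLETE_BAN = 5          # the most dangerous systems


def classify(infers_inner_states: bool,
             manipulates_opinions: bool,
             affects_legal_position: bool) -> RiskTier:
    # Interference with an absolute right cannot be balanced away, so it
    # maps straight to a ban regardless of sector or claimed benefit.
    if infers_inner_states or manipulates_opinions:
        return RiskTier.COMPLETE_BAN
    if affects_legal_position:
        return RiskTier.EX_ANTE_APPROVAL
    return RiskTier.NO_REGULATION


print(classify(infers_inner_states=True,
               manipulates_opinions=False,
               affects_legal_position=False))   # RiskTier.COMPLETE_BAN
```

The design point is the absence of a proportionality branch for the first test: where the right engaged is absolute, the only regulatory question is whether the practice falls within its scope.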

5 Securing diversity of thought and innovation for the future

In a 2019 address to Stanford students, Apple CEO Tim Cook[38] spoke of the chilling effect of digital surveillance, warning that the “small, unimaginative world we would end up with” if we continue on our current global course is “the kind of environment that would have stopped Silicon Valley before it had even gotten started.” He concluded:

“If we believe that freedom means an environment where great ideas can take root, where they can grow and be nurtured without fear of irrational restrictions or burdens, then it’s our duty to change course, because your generation ought to have the same freedom to shape the future as the generation that came before.”

But the world’s digital future need not start in Silicon Valley. Europe could be at the vanguard of moving the business model around data and AI away from “surveillance capitalism”, which feeds off consumer data and the interpretation and exploitation of our inner states. By steering the development of European AI frameworks away from the exploitation of consumer data, Europe could open a new direction in AI and data-driven technology that promotes and protects the European values of freedom of thought and opinion. The surveillance capitalism model risks crushing those freedoms for future generations.

Addressing the risks to the rights to freedom of thought and opinion in the context of big data and artificial intelligence requires a cross-sectoral approach that puts human agency and autonomy at the heart of European policy and regulation. Privacy is certainly a prerequisite for effective enjoyment of the right to freedom of thought, but it is a gateway right, and there is an urgent need to protect the right itself as technologists develop backdoors to our minds that bypass privacy regulations with sleight of hand.

The Committee of Ministers of the Council of Europe, in its Declaration on the manipulative capabilities of algorithmic processes, clearly detailed the problem that regulation needs to address when it drew attention “to the growing threat to the right of human beings to form opinions and take decisions independently of automated systems, which emanates from advanced digital technologies. Attention should be paid particularly to their capacity to use personal and non-personal data to sort and micro-target people, to identify individual vulnerabilities and exploit accurate predictive knowledge, and to reconfigure social environments in order to meet specific goals and vested interests;”

and encouraged member States “to assume their responsibility to address this threat by

a) ensuring that adequate priority attention is paid at senior level to this inter-disciplinary concern that often falls in between established mandates of relevant authorities;

b) considering the need for additional protective frameworks related to data that go beyond current notions of personal data protection and privacy and address the significant impacts of the targeted use of data on societies and on the exercise of human rights more broadly;

c) initiating, within appropriate institutional frameworks, open-ended, informed and inclusive public debates with a view to providing guidance on where to draw the line between forms of permissible persuasion and unacceptable manipulation. The latter may take the form of influence that is subliminal, exploits existing vulnerabilities or cognitive biases, and/or encroaches on the independence and authenticity of individual decision-making;

d) taking appropriate and proportionate measures to ensure that effective legal guarantees are in place against such forms of illegitimate interference; and

e) empowering users by promoting critical digital literacy skills and robustly enhancing public awareness of how many data are generated and processed by personal devices, networks, and platforms through algorithmic processes that are trained for data exploitation. Specifically, public awareness should be enhanced of the fact that algorithmic tools are widely used for commercial purposes and, increasingly, for political reasons, as well as for ambitions of anti- or undemocratic power gain, warfare, or to inflict direct harm;”[39]

This list provides a useful starting point for considering a regulatory framework that specifically addresses the protection of the rights to freedom in the “forum internum”. The European Union, with its binding legislative capacity, should take note as it moves into the next stage of digital and technological regulation. It could be the driver of rights protection in its Member States; at a minimum, in line with the legal obligations of the EU Charter, it should certainly not introduce frameworks that might inhibit Member States from fulfilling their positive obligations to protect the rights to freedom of thought and opinion within their borders, now and for the future.

Protecting the rights to freedom of thought and freedom of opinion in the sanctity of our minds is a matter of urgency if we are serious about a human future of innovation and diversity of thought. This is related to, but not the same as, protecting privacy and the freedom to express our opinions and to receive and impart information. Placing the absolute rights to freedom of thought and opinion at the heart of regulation provides a different perspective on what may be, and what will never be, acceptable in human rights terms. It requires the power to prohibit certain practices because of their impact on our thought processes, regardless of the field in which they operate, and a different view of risk and harm. The ability to infer and manipulate the thoughts and opinions of individuals has implications not only for individual rights, but for the foundations of democratic society.

The European Union consultations on AI and on digital services offer an opportunity for the EU to focus on this area in the next wave of European regulation. But in the absence of EU leadership on this topic, States and other regional and international organisations can begin to develop their own legislative and regulatory protections for the rights to freedom of thought and opinion. Effective regulation in this area will need to consider the scope of the rights. The courts will no doubt begin to consider the fundamental questions these rights raise, but in the absence of clear jurisprudence, there are several areas that policymakers will need to grapple with.

Firstly, there is a need to clarify the scope of the right. Professor Martin Scheinin, in his commentary on the Universal Declaration of Human Rights, suggests that freedom of thought, conscience and religion taken together cover all possible attitudes towards the world and society, protecting the “absolute character of the freedom of an inner state of mind.” The UN Human Rights Committee has described the scope of the right as “far-reaching and profound.”[40] And the European Commission of Human Rights found that, given the “comprehensiveness of the concept of thought,” a parent’s wish to name their child in a certain way would come within the scope of the right to freedom of thought.[41] This indicates a broad scope of protection for all kinds of thought, whether trivial or profound, and this should be reflected in the digital context.

Secondly, lines need to be drawn between the internal and external aspects of the right. In the digital world, at what point do our thoughts become expression? The Research Report on Artistic Freedom of the UN Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression[42] noted that “artistic work product – the work that precedes any kind of dissemination or distribution, the creations that a person is still working through, the private thinking and creation before one imparts to others – should be considered to constitute protected opinion not subject to interference.” The line between opinion and expression may, therefore, be drawn at our decision to share what we are thinking, even if we have made private notes or left traces that reveal our inner thoughts.

Thirdly, the line drawn by the Committee of Ministers of the Council of Europe[43] between permissible persuasion and unacceptable manipulation is crucial to regulating to protect freedom of thought and opinion. Many countries around the world, and the European Union, were quick to ban subliminal advertising on television when it was first raised as a possibility, regardless of whether or not it was effective. Yet in the digital sphere, there has been very little attempt to prohibit the use of subliminal psychological techniques, whether in political advertising or in any other context. This is an issue that needs to be addressed as a matter of urgency.

Regulating from the perspective of the right to freedom of thought is new and complex, but our future as autonomous humans living in democratic societies founded on human rights depends upon it, and there is no more time to lose.