Against this backdrop, let us turn to the specific rights proposed. For brevity's sake, they will be assessed against criteria for novel human rights that have emerged in human rights scholarship; the treatment is concededly coarse, but it should suffice to show their unsuitability. For many years, the field has controversially discussed which interests merit access to the pantheon of human rights, given the worry about proliferation (supra). A widely endorsed position is that the list of human rights should be confined to the most important, fundamental, and universally applicable values. This led to debates about quality criteria for novel human rights. To give a taste, here are a few, put up for discussion in an influential paper by Philip Alston (1984).Footnote 18 Suitable candidates should, among other things, reflect a fundamentally important value; be relevant throughout a world of diverse value systems; be consistent with, but not merely repetitive of, the existing body of human rights law; be capable of achieving a very high degree of international consensus; and be sufficiently precise as to give rise to identifiable rights and obligations. Let us see how the proposed rights fare in light of these criteria.
The right to personal identity is vague. What is the meaning of identity here? As bioethical debates have shown, the term is ambiguous. It can refer to diachronic identity (the conditions under which a person is the same at two different points in time), to personality (character), to group membership as in identity politics, and more.Footnote 19 It is unclear what a right to personal identity in the first sense would amount to.Footnote 20 A right to one's personality in the second sense is clearer; it might protect against interferences with one's personality, such as attacks on reputation or character. This is a valuable right, and it is, under this name or another, fully or in part recognized in current law. Roughly speaking, many continental European legal systems tend to focus on rights to personality, whereas US law focuses on rights to privacy (starting with Warren and Brandeis 1890; [27, 28]). But from different angles, both capture a wide range of interferences with personality. Furthermore, Article 1 of the Oviedo Convention states that parties shall "protect the dignity and identity of all human beings". How does the proposed right relate to these existing protections of identity? The proposal mentions two aspects. First, technologies should not disrupt "the sense of self". This is itself a complex term. Does it include, e.g., the trauma of refugee children who see their parents drown? Or is it about a feeling towards oneself, or about conditions of phenomenal selfhood? And is the right to personal identity only about this "sense", or also about other identity-related issues?
Second, the Proposal mentions neurotechnologies connecting "individuals with digital networks", blurring the "line between a person's consciousness and external technological inputs". With a nod to the Extended Mind debate, one may wonder whether this is not already happening all the time (smartphones). In any case, the substantive question left unaddressed is where the boundaries between humans and machines should run and who may overstep them – everyone, no one, governments? Does the right prohibit people from blending with digital technologies – and is it then still a right, or rather a duty? These questions show that the normative content of the proposed right is largely unclear; it might not refer to an operationalizable right at all, but rather gestures towards an – indeed interesting – new chapter in the history of human-technology interactions. The proposal fails to give rise to identifiable rights and obligations.
The right to free will may be a feast for philosophers, but less so for courts.Footnote 21 In general, the concept of free will – whether or not its substrate exists – has been debated for centuries. One may wonder which of the innumerable accounts should be relevant for the law; or should the law, as commentators hesitantly ask, develop "a consensual, minimal definition of free will"? I suggest it should not. Rather, the law should avoid adopting rights to eternally contested concepts. Precision is a virtue of lawmaking.
Regarding substance, it is claimed that persons should have "ultimate control over their own decision making". This seemingly innocuous remark goes to the heart of the free will debate, in which "ultimate control" is precisely one of the contested notions. The skeptical claim is that people never possess ultimate control: every decision can either be traced back to a long chain of deterministically caused events extending beyond the existence of the person, or equally uncontrollable indeterminacy comes in at some point. It would be unfortunate to import this dilemma into the law.
A more fitting characterization of the role of free will in the law is that the former is a presupposition of the latter, at least with respect to (criminal) responsibility. A presuppositional analysis may indeed reveal that legal systems do not fully protect the factual conditions of responsibility. Therefore, the law may – as the proposal suggests – consider a right against manipulation. But note that the interest in not being manipulated is already recognized by a range of legal norms revolving around ideas such as undue influence. Still, legal protection against manipulation might be unsystematic and have loopholes. It may thus be worthwhile to systematically analyze and possibly revise legal doctrines about manipulation. Input from the cognitive sciences would be invaluable to this project, which will soon become complex, given the ubiquitous nature of influence and the difficulties in drawing meaningful distinctions between permissible and impermissible forms (see e.g. Coons and Weber). Yet developing such a right does not require another affirmation of the idea that people should not be manipulated, but rather a theory of what this means. Thus, a broad right to free will is almost inherently unclear; its adoption would be unfortunate and unnecessary.
The importance of a right to mental privacy is evident not only to lawyers. But it raises the question of why it should be recognized as a standalone right (as the proposal is understood to suggest). Several international instruments protect a general right to privacy or private life. According to the standards of legal interpretation, this abstract right implies more context- or domain-specific variations. In other words, mental privacy is implied by the more general right to privacy.Footnote 22 And historically, the seminal paper on the right to privacy noted in 1890: "The common law secures to each individual the right of determining, ordinarily, to what extent his thoughts, sentiments, and emotions shall be communicated to others" (1890: 199). This sounds like privacy of mental states. Proponents of a novel, standalone right to mental privacy need to show why this reading is false; why privacy of the mind is, in principle, different from, say, the privacy of the bedroom. Without this, they seem to be merely different domains of application of a broader idea, the right to be let alone.
I wish to note that breaking off a right from a parent right can sometimes be useful for doctrinal or symbolic purposes. But this requires compelling reasons. Furthermore, the intriguing questions about mental privacy do not concern its existence, but its scope, strength, and limits; e.g., how it fares with respect to legitimate public interests in infringing privacy (law enforcement). Controversial debates about these issues are ongoing, and scholarly input, especially regarding neuroimaging methods, is needed. This might be a more promising field for interdisciplinary policy advice.
The suggested special protection of neurodata merits comment, as it has been raised on several occasions [34, 35]. Existing, internationally diverging data protection laws stipulate which data may be used under which conditions and for which purposes. Calls for novel frameworks must show why and where existing regulations fail, or why neurodata is so special that it should not fall under them. For example, the European General Data Protection Regulation (GDPR) has a special category of sensitive data, including genetic and health data (Article 9). Much data about the brain ("neuro") or stemming from medical examinations of it (neuroimaging) is covered by this category. As a consequence, processing of such data is prohibited, with enumerated exceptions. Insofar as some forms of neurodata are not covered but should be, one may insert "neurodata" into Article 9, next to other types of data such as genetic data. No further reforms are needed. Surely, the GDPR and other regulations may have shortcomings, but these do not arise from the nature of neurodata; they stem from developments such as big data, or data-driven business models in which consumers voluntarily exchange their data for the use of services. These issues need to be addressed, but within existing frameworks.
Attention should be drawn to ongoing debates at many levels; e.g., the "recommendation on the protection and the use of health-related data" by the UN Special Rapporteur on the Right to Privacy addresses issues of AI and Big Data. Developed proposals are on the table. The task of the day is to engage with and possibly improve them, rather than to call for separate frameworks. Finally, the absurdity of the proposal to model a neurodata framework after regulations for organ donation needs to be mentioned. The different ontological natures of bodily organs and data, and the fact that the latter can be multiplied at will while the former cannot, decisively speak against this analogy. The main concern about organ donation is commodification and the financial pressure to sell parts of one's body. This is categorically different from selling neurodata. In fact, a range of legitimate interests in the use of neurodata by private actors is conceivable. Why should, e.g., a company not buy motor cortex data from consumers to optimize its BCI-gaming software?
The right to equal access to mental augmentation is interesting. But is it about providing access, about equality of access, or both? These options would lead to quite different claims. The explanation calls for "established guidelines at both international and national levels". Fortunately, such regulations seem to exist. The most prevalent means of mental enhancement, pharmaceuticals, is regulated in large part by three international treaties, monitored by several international agencies with local offices in every country.Footnote 23 For medical and technological devices, different countries have different regulations (often as medical devices, Sienna). The demand of the NRI thus seems fulfilled. The real question is again a different one, namely whether those regulations should be reformed (Maslen et al.).
Insofar as the NRI's call implies easier access to mental augmentation tools, it should be noted that the enhancement debate is far from the "international consensus" required across diverse value systems. Transculturally, people may reasonably disagree about, e.g., the significance of nature, the sanctity of the body, or the importance of the self. In fact, one may wonder why there should be an international regulatory framework at all. Given the highly technical nature of such regulations and substantive differences in regulatory cultures, e.g., with respect to liability law, common ground is unlikely to be found. The legitimate diversity of views on the matter also speaks for local experiments rather than unified international standards. The proposed right is thus not suited as an international human right.
Finally, the right to protection from algorithmic bias. Biases are often wrong and should be counteracted. But why a right only against algorithmic biases – what about those of ordinary human psychology? May we not want to trade the latter for algorithmic biases if those are less severe? More generally, artificial intelligence (AI) will raise a range of human rights concerns beyond biases: a right to explanation, to human oversight, against black boxes, to workplace protections, a right for everyone to use AI, or, conversely, a right to live an analog (non-digital) life. Many topics need to be discussed. Conceivably, AI regulation requires international cooperation – and possibly new legal instruments – demarcating the boundaries between AI and humans (Special Rapporteur). But these AI-related problems should be discussed in the proper framework, not as an annex to neurotechnology or biases. That would be an unfortunate framing. A good place might be the European Commission's recent proposal for an Artificial Intelligence Act. Rather than adopting the proposed right, lawmakers should foster comprehensive and much broader debates about the problems and regulation of AI, and wait with implementation until issues and solutions become clearer.
The proposed rights concededly deserve deeper treatment. But these brief remarks may suffice for a verdict in light of Alston's criteria: The rights to personal identity and free will are too vague. This is not due to abstract formulations or vagueness at the penumbra, which are inherent to human rights, but to substantial unclarity about the core of these rights. On some understandings – a right to personality, a right against manipulation – there is not much new under the sun. The same verdict applies to the proposed right to mental privacy: it is repetitive of existing rights and redundant, as it is implied in the general right to privacy. The scope of the right to mental augmentation and its relation to existing regulatory frameworks are equally unsettled. Better reasons speak for diversity in regulation rather than an international framework, and hence against the adoption of novel fundamental rights. The right to protection from algorithmic bias points to an interesting problem, but much more substantive work on it in the context of AI regulation is required. Apart from mental privacy, none of the rights is sufficiently precise to give rise to identifiable claims in concrete cases. They are neither well considered individually nor situated within the existing human rights framework; they repeat some existing rights and contravene others. Their level is either too broad (free will) or overly specific (algorithmic bias); they are overinclusive (mental augmentation) or underinclusive, as they leave out a range of potential concerns and further rights (workplace, non-digital life).Footnote 24 In sum, these rights do not convey the impression that they were drafted to be released into the existing landscape of norms, which they affect and with which they interact. The proposal neither refers to nor engages with existing rights, and in that sense it does not seem to take human rights seriously (perhaps because of the faulty premise that current law is silent on these matters).
After all this, the conclusion is that none of the proposed rights passes quality control according to the Alston criteria. They offer solutions for problems that may not exist in the alleged way; they seek recognition as human rights without recognizing human rights law; they tend to put symbolism before substance. The class of "neurorights" does not fit in with established rights; it is tainted by neuroexceptionalism and neuroessentialism. These rights tend to promote rights inflationism and to curb the scope of democratic decision-making without substantive justification. Moreover, at best half-baked proposals are a disservice to the cause. Other suggestions for policies on neurotechnologies might be easily dismissed by lawmakers, institutions, or other stakeholders once they have formed an unfavorable opinion about such ideas on the basis of the present proposal. Most importantly, well-meaning but imperfect advice helps neither the politicians engaging with, and acting on, such advice, nor the people they represent. I therefore wish to suggest stopping the lobbying for neurorights.