We identify four intimately interconnected areas that require attention and proactive governance to ensure the safe and responsible use of brain data outside of the biomedical domain:
Gaps in Supranational and International Law
No mandatory governance framework focused on brain data currently exists in supranational or international law. Prima facie, brain data are personal data, as codified inter alia in the legally binding General Data Protection Regulation (GDPR) of the European Union, the OECD’s non-binding 2013 Privacy Guidelines, the Council of Europe’s (CoE) forthcoming Modernized Convention for the Protection of Individuals with Regard to the Processing of Personal Data, and the European Convention on Human Rights (ECHR), particularly Article 8. Under these instruments, personal data are defined as any information relating to an identified or identifiable natural person (Art. 4 GDPR; Art. 1 OECD Privacy Guidelines; Art. 2a CoE).
The right to privacy, enshrined in Article 8 ECHR, includes the right to data protection. Article 8 ECHR protects sensitive information, including personal data revealing, for example, political opinions, information about a person's health, racial origin, or sexual orientation. With respect to genetic and biometric data (e.g., cell samples, voice samples), the European Court of Human Rights (ECtHR) found that, owing to rapid technological developments, it is not possible to anticipate and understand all the adverse effects that the collection of such data may entail for private life, and that the collection of any genetic or biometric data therefore constitutes per se an interference with Article 8 ECHR. The ECtHR might follow a similar approach with regard to brain data.
However, the classification of brain data as personal data under the GDPR has several limits. Firstly, the GDPR is not applicable if brain data are anonymized, even though the technical difficulty of anonymizing brain data leaves open the potential for re-identification. Research shows the feasibility of re-identifying data subjects based on electrophysiological measurements or neuroimaging data, predicting present emotional states and future behavior from brain data, and decoding information either from the neural activity of data subjects or from their digital phenotypes [24, 32]. Given the technologies involved in processing brain data and their highly contextual nature, the likelihood that anonymized brain data (or data believed to be anonymized) will become re-identifiable is non-negligible.
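To make the re-identification risk concrete, the following minimal Python sketch shows how an ‘anonymized’ EEG recording could be matched back to an enrolled subject using crude spectral band-power fingerprints. It is an illustrative assumption on our part, not a method from the cited studies: the feature choice and function names are hypothetical, and real re-identification pipelines use far richer features and classifiers.

```python
import numpy as np

# Illustrative only: band-power "fingerprints" are a crude stand-in for the
# richer electrophysiological features used in the re-identification literature.
BANDS = [(1, 4), (4, 8), (8, 13), (13, 30)]  # delta, theta, alpha, beta (Hz)

def band_power_features(eeg, fs=256):
    """Per-channel spectral band powers of an EEG array shaped (channels, samples)."""
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
    return np.concatenate(
        [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1) for lo, hi in BANDS]
    )

def reidentify(anonymous_eeg, enrolled, fs=256):
    """Match an 'anonymized' recording to the most similar enrolled subject
    by correlating spectral fingerprints. `enrolled` maps subject -> EEG array."""
    probe = band_power_features(anonymous_eeg, fs)
    return max(
        enrolled,
        key=lambda s: np.corrcoef(probe, band_power_features(enrolled[s], fs))[0, 1],
    )
```

The point of the sketch is that no direct identifier is needed: stable statistical regularities of the signal itself can serve as a quasi-biometric, which is why stripping metadata does not by itself anonymize brain data.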
Secondly, unique characteristics of brain data pose challenges to safeguarding the rights of data subjects. A prominent example is the right to be forgotten, i.e., one’s right to request that a data controller delete one’s personal data. A key characteristic of brain data is that they are potentially re-identifiable and elude conscious control. Therefore, even if a person is initially able to have their data deleted, the data controller or others might use those data to derivatively re-link them to the person concerned. Most importantly, in the case of brain data involving ‘unconscious’ information, the data controller might be able to retain data the individual is not even aware of. Finally, data deletion by consumer BCI companies may be difficult to obtain due to the impact that such erasure would have on the accuracy of predictive models.
Thirdly, the GDPR allows derogations from the rights of data subjects if data (including the special categories of data listed in Article 9(1) GDPR) are processed for research or statistical purposes. These research exemptions also apply to research conducted by private companies, as pointed out by Recital 159 of the GDPR, which names “privately funded research” as part of the science privileged by the GDPR. This implies that the processing of brain data by both public and private actors (e.g., government agencies or consumer neurotechnology companies) may rely on derogations from the main GDPR rules. Nevertheless, it is unclear under which conditions the research exemption from the purpose limitation principle defined in Article 5(1)(b) GDPR applies to brain data collected in the consumer context.
Further, brain data may undermine another principle of data protection law, namely purpose limitation. By default, any personal data (including health data) may only be collected for specific purposes that must be specified at the time consent is given by the data subject or another legal basis is invoked, that is, before data collection and processing begin. However, the exact specification of purposes is very difficult for brain data because current technology cannot pre-emptively discern purpose-specific data from the myriad of brain signals. Tools for selective filtering, such as the Brain-Computer Interface Anonymizer, are in early stages of development (see the illustrative sketch below). The GDPR allows purposes to be framed more broadly in specific cases. Nevertheless, data security measures intended to balance the risks to the rights and interests of the data subject against the interests in the data processing are difficult to define where processing purposes are framed more broadly, such as on the basis of broad consent for scientific research (Recital 33 GDPR) or of processing for scientific research purposes, Art. 9(2)(j) GDPR in conjunction with Art. 89(1) GDPR. Last but not least, the GDPR introduces the fiction that secondary processing for scientific research purposes is compatible with the initial purpose (Art. 5(1)(b) GDPR). Arguably, commercial scientific research, like any other research, is subject to transparency obligations, which are higher where for-profit benefits are derived from research conducted with the data.
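The sketch below illustrates the general idea behind selective-filtering proposals such as the BCI Anonymizer: an application declares the narrow signal band it needs, and everything else is suppressed before the data leave the device. The implementation is a hypothetical simplification of ours, not the actual tool, and it also shows the limit of the approach: coarse band-pass filtering cannot guarantee that purpose-unrelated information riding on the same frequencies is removed, which is precisely why purpose specification for brain data remains so difficult.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def purpose_limited_stream(eeg, fs=256, declared_band=(8, 13)):
    """Forward only the frequency band an application declares it needs
    (here the alpha band, e.g. for a relaxation app) and suppress the rest.

    Hypothetical sketch of the selective-filtering concept. NOTE: incidental
    information (e.g. affective correlates within the declared band) can
    survive such filtering, so this is not true purpose separation."""
    # 4th-order Butterworth band-pass applied forward and backward
    # (zero phase distortion), along the samples axis.
    sos = butter(4, declared_band, btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, eeg, axis=-1)
```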
Finally, safeguards provided by data protection law may not adequately scale to group-level data. This lack of adequate scaling raises a twofold group-privacy risk. First, third parties can make inferences about a group of data subjects based on one or multiple features inherent in the brain data and shared by all individuals in the group (e.g., slower reaction times in cognitive tests). Second, individuals could unwittingly be identified through their brain data, however anonymized, as part of a hitherto unsuspected group (e.g., people showing prodromal signatures of cognitive decline) and subsequently be discriminated against.
To complicate matters, brain data generated by consumer neurotechnologies may not constitute ‘health data’ and are hence subject to lower protections than data from clinical applications, because these devices do not fall under medical device regulation regimes.
Gaps in Ethics and Soft Law
Gaps in Responsible Innovation
Currently, most applications that collect and process brain data outside the clinical and medical research context do not seek compliance with the EU Medical Device Regulation (MDR, 2017/745) or approval from the US Food & Drug Administration (FDA). Approval from these agencies is only necessary for software and devices with a medical purpose, and the bypassing of the relevant medical device regulation is generally predicated on the non-medical scope of these devices and programs. However, a further challenge arises: even though brain stimulation products are covered under Annex XVI, No. 6, the Regulation does not cover brain data processing for purposes other than neuromodulation. We call for expanding the purview of this regulation so as to include devices with which users (including vulnerable individuals and groups) may share their brain data for non-medical yet health-related purposes, such as cognitive monitoring and mental wellbeing. Such devices are currently not classified as medical devices and are regularly marketed for wellness, relaxation, and other non-medical purposes. They also do not fall within the scope of application based on Annex XVI of the MDR, as they often do not involve brain stimulation. Furthermore, providing increased guidance to users through clear labelling of such products as not suitable for health-related and medical purposes could enhance transparency and contribute to the fulfilment of information obligations.

Moreover, consumer and military neurotechnologies can collect medically relevant parameters (e.g., via EEG measurements) and often claim to draw inferences about cognition or psychological wellbeing. Many wearable devices and applications are available for commercial, personal, and even health-related use without the relevant labelling required by data quality standards. Typically, users of consumer neurotechnology devices or services have no information about how in-house brain function databases are compiled, and no guarantee that such databases are sufficiently representative to provide valid assessments of individual or group-level cognitive function and affective state. Insufficiently validated applications may incorporate bias, provide false information, or even cause harm to users, for instance when users make health-related decisions based on these apps. Additional hazards may be posed by malicious hacking, eavesdropping, unauthorized access by third parties, unsecured data transmissions, re-identification of anonymized data, and identity theft. Some of these risks also extend to the clinical and biomedical research field.
Neurotechnological devices that are deliberately developed to fall outside of medical device regulations are often marketed as direct-to-consumer products and therefore fall under the purview of consumer protection laws and regulation. However, current consumer protection (e.g., in the EU and the US) is a legal patchwork that may often allow companies to find regulatory loopholes. Lawmakers and regulatory agencies should therefore jointly work on defining a clear set of regulatory approaches to consumer neurotechnology devices that apply within domestic markets and may be harmonized across international markets.
An important step towards innovation governance was recently marked by the Recommendation on Responsible Innovation in Neurotechnology, adopted by the OECD in December 2019, which sets the first international standards for responsible innovation in this domain.
Gaps in International Human Rights Frameworks and Further Lacunae
Human rights instruments, such as the UN’s Universal Declaration of Human Rights (1948), which is legally binding as part of customary international law, were drafted long before brain data became measurable outside the clinic and amenable to big data analytics. They therefore did not explicitly spell out requirements for gaining access to and using brain data in a manner that protects individual rights. Whereas the conditions for the legitimate use of human genetic data have been delineated in UNESCO’s soft-law International Declaration on Human Genetic Data (2003), human brain data remain without explicit safeguards and lack comparable protection under human rights instruments. In response, scholars have called for expanding the existing human rights framework so as to explicitly include rights purposively designed to protect the brain and mind domain of a person, hence called neurorights. These rights can be seen either as evolutionary interpretations of existing rights or as new rights. Further, they constitute rights both in the legal sense (in accordance with international human rights law) and in the philosophical sense (in accordance with rights-based moral philosophy).
Further, no specific international treaty addresses the dual use or potential weaponization of brain data for military purposes. Dual-use research and technologies that collect human brain data therefore constitute a pressing anticipatory governance concern as neurotechnology evolves and is increasingly researched in military settings.