To the extent that neurotechnologies somehow embody a claim that the mind may be open to view, they each raise ethical concerns relating to a range of issues, including mental privacy. Related to this is a concern over the reduction of mental states to sets of neural data. We will consider these, along with the related areas of cognitive liberty and self-conception, in more detail below. Before delving into these functional issues arising from the use of neurotechnology, something should be said about how neurotechnology is presented.
Outside the research lab, a variety of BCIs is already available commercially, including products such as the Cyberlink, Neural Impulse Actuator, Enobio, EPOC and Mindset (Gnanayutham and Good 2011). The prospects for applications based on these types of technology are interesting (Mégevand 2014). However, the plausibility of technological claims ought to be carefully scrutinised.
While the detection of neural signals is in principle easy, identifying them is difficult (Bashashati et al. 2007). Much research effort aims at improving detection and recording technology, which should improve the prospects for identifying recorded neural signals. Identification is centrally relevant to mind-reading in that the signals recorded must somehow be correlated with mental states. It is ethically relevant too, not least owing to the prospect of misidentifying mental states via inappropriately processed brain recordings, or of misrepresenting the nature of the recording taking place.
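To make the distinction between detection and identification concrete, the following minimal sketch (in Python, with synthetic data standing in for any real recording, and invented parameters throughout) illustrates the point: deciding that a signal is present above the noise floor needs only the recording itself, whereas mapping that recording onto a mental state requires labels supplied from outside the recording, for instance from the participant's own report.

```python
# Illustrative sketch only: synthetic "EEG" epochs, not real neural data.
# Detection (is there a signal above noise?) is easy; identification
# (which mental state does it correspond to?) needs externally supplied labels.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fs, n_epochs, n_samples = 250, 200, 500        # sampling rate (Hz), epochs, samples per epoch
labels = rng.integers(0, 2, n_epochs)          # hypothetical mental-state labels (e.g. from interview)

t = np.arange(n_samples) / fs
epochs = []
for y in labels:
    alpha = (2.0 if y == 0 else 0.5) * np.sin(2 * np.pi * 10 * t)  # stronger 10 Hz rhythm in state 0
    epochs.append(alpha + rng.normal(0, 1.0, n_samples))           # plus background noise
epochs = np.array(epochs)

# "Detection": signal power sits clearly above an arbitrary noise floor in every epoch.
power = (epochs ** 2).mean(axis=1)
print("epochs exceeding noise floor:", int((power > 0.5).sum()), "of", n_epochs)

# "Identification": mapping the recording to a mental state needs the labels,
# which come from the participant's report, not from the voltage trace itself.
alpha_band_power = np.abs(np.fft.rfft(epochs, axis=1))[:, 18:22].mean(axis=1)
clf = LogisticRegression().fit(alpha_band_power[:100, None], labels[:100])
print("held-out identification accuracy:", clf.score(alpha_band_power[100:, None], labels[100:]))
```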
Brain signals can be sorted into types, and recording sites can be classified in functional terms: visual, motor, memory, and language areas, for example. That types of signals in specific areas appear to lie 'behind' our conscious activity suggests that such activity ought to be classifiable in a quite objective way. At least some neurotechnological development paradigms suggest that this is the case: claims have been made for the kinds of technologies discussed above that they can 'access thoughts', 'identify images from brain signals', or 'read hidden intentions' (Haynes et al. 2007; Kay et al. 2008). Attending to the brain signals means getting at the mental content, these claims suggest.
But this may be a case of overclaiming. A great deal more information than is captured through measuring brain signals seems to be required if meaningful inferences about thought content are to be drawn from them. For example, Yukiyasu Kamitani carried out experimental work aimed at 'decoding dreams' from functional magnetic resonance imaging (fMRI) data. Media reports presented this work as if dreams were simply recorded from sleeping experimental participants (Akst 2013; Revell 2018). In reality, 30–45 hours of interviews per participant were required in order to classify a small number of objects dreamt of. This is impressive neuroscience experimentation, but it is not simply a 'reading of the brain' to 'decode a dream'. Interview is an interesting supplement to brain signal recording because it deals specifically in verbal disclosures about the experience of mental states.
When it is reported that Facebook or Microsoft will develop a device to allow users to operate computers with their minds or their thoughts (Forrest 2017; Solon 2017; Sulleyman 2018), this is perhaps too extravagant a claim. While many consumer devices are marketed as 'neurotechnology', it is implausible that they actually operate by detecting and recording brain signals (Wexler and Thibault 2018). Far more likely is that such devices respond to electrical activity in the muscles of the face, whose signals are perhaps 200 times as strong as those in the brain and positioned much closer to the device's electrodes. In all likelihood, doing something like typing with such a device exploits micro-movements made when thinking carefully about words and phrases: muscles used in speaking those words are activated as if preparing to speak them, and hence correspond to them in a way that can be operationalised into a typing application. Indeed, this is the stated mode of operation for MIT's 'AlterEgo' device (Kapur et al. 2018; Whyte 2018).
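A rough, purely illustrative calculation, taking the figure of roughly 200-fold stronger muscle signals at face value, shows why the recording at such an electrode would be dominated by muscle rather than brain activity:

```python
# Back-of-envelope sketch of the claim above: if facial-muscle (EMG) activity
# reaching the electrode is ~200 times the amplitude of cortical (EEG) activity,
# the recorded signal is overwhelmingly EMG. Numbers are illustrative, not measured.
eeg_amplitude = 1.0            # arbitrary units
emg_amplitude = 200.0 * eeg_amplitude
eeg_power, emg_power = eeg_amplitude ** 2, emg_amplitude ** 2
print(f"EEG share of recorded power: {eeg_power / (eeg_power + emg_power):.6%}")
# -> roughly 0.0025%: a 'mind-controlled' typing demo may in practice be a muscle interface.
```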
Overclaiming is an ethical issue in that it can undermine confidence in neurotechnologies in at least two ways: by failing to deliver on misrepresented capabilities, and by raising undue hopes and concerns. Both rest on a misleading representation of how a device works and of its prospects as an effective technology. This has ethical implications for user consent in using a device. Given this sort of misrepresentation, there may be varying degrees of deception at work, which could affect how we ought to consider the potential uptake and use of devices, whether by experimental participants or by consumers.
Drawing on the dream-decoding example, we have reason to think that the objective recording of brain signals is insufficient as an account of a mental state precisely because it has no experiential dimension. Thoughts occur within an internal model of the world, from a particular point of view. This model cannot be straightforwardly generalised from subject to subject on the basis of brain signal observation. Only specific dimensions of it can be inferred, with limited predictive power, and only after large amounts of training under rigorous research conditions. The objective promise of recording brain signals might be exactly what cuts them off from the mind, which includes a subjective perspective.
The possibility of a too-zealous reduction of the mind to neural data arises here as an ethical concern. 'Mental' concepts can bear discussion without reference to 'neuroscientific' concepts, and vice versa. How each might relate to natural kinds is an open question (Churchland 1989). There is therefore a ubiquitous question of interpretation to be borne in mind as the interplay between mind and brain is considered. The thought experiment of a 'cerebroscope' serves to highlight this.
The cerebroscope is a notional device that records the activity of every neuron in the brain on a millisecond-by-millisecond basis. With this total representation of neural activity, the question is whether we have a representation of the mind. Steven Rose suggests not: the nature of the brain as an evolving, plastic entity means that millisecond-by-millisecond resolution of neural activity is not intelligible without a total map of the genesis of those active neurons and their connections:
…for the cerebroscope to be able to interpret a particular pattern of neural activity as representing my experience of seeing [a] red bus, it needs more than to be able to record the activity of all those neurons at this present moment, over the few seconds of recognition and action. It needs to have been coupled up to my brain and body from conception—or at least from birth, so as to be able to record my entire neural and hormonal life history. Then, and only then, might it be possible for it to decode the neural information. (Choudhury and Slaby 2016, p. 62ff)
We should be careful in considering these sorts of issues when it comes to thinking of mind-reading. It might be thought that the mind is akin to a space through which a putative mind-reader could walk, examining what is to be found there. But Steven Rose's point suggests a more situated kind of mind, reliant upon its genesis as well as its state at some moment in time. The point being made is that even were one somehow to perceive the thought of another, it could only be understood as a subjective thought, not as an objective thought had by another.
Relatedly, Mecacci and Haselager (2019) discuss some philosophical ideas that bear on the privacy of 'the mental'. They describe a perspectivalism, drawn from A. J. Ayer, regarding mental states, which prioritises the privacy of the mind and its contents. Such a view would also appear to rule out mind-reading, since mental states require a particular perspective: they appear not as objects in a mental space potentially open to view, but as the private contents of a specific mind.
Misrepresentation of technology and reductionism each appear to be dimensions of ethical importance in themselves. But a little more analysis of each shows them to lead to a broader set of ethical issues in neurotechnology. Where mental privacy is threatened, cognitive liberty may suffer. 'Cognitive liberty' includes the idea that one ought to be free from brain manipulation in order to think one's own thoughts (Sententia 2006). The concept arises most often in the context of neuro-interventions in law or psychiatry, or of neuroenhancement (Boire 2001). Here, it is most salient in connection with a potential loss of mental privacy.
Where mental privacy is uncertain, it is not clear that someone may feel free to think their own thoughts. Where measurements of brain activity may be taken (rightly or wrongly) to reveal mental contents, neurophysiology itself could be seen as a potential informant on thought. This would uproot very widely held assumptions about a person's unique and privileged access to their own thoughts. If a keen diarist were to become aware that their diary could be read by another, they might begin to write less candid or revealing entries. If anyone became convinced that measurements of their brain might reveal any of their mental contents, how might they refrain from having candid and revealing thoughts? This would amount to a deformation of normal ways of thinking, in a rather distressing way.
With this distressing possibility, the very idea of self-conception is threatened too. Where mental privacy concerns inhibit cognitive liberty, one might not feel free to reflect upon values, decisions, or propositions without threat of consequences. Entertaining ethically dubious thoughts, even if only to develop ways of refuting them, might become dangerous where the content of the thought could be read from the activity of the brain. Faced with technology that appears to read minds, ethical risks arise from that technology's representing the mind as open to view.
Part of what it is to have a mind, and to be an agent at all, able to act on one's reasoned opinions, involves reflection. This might mean considering things we would not do, or running through options we disdain or otherwise wish to reject. If we were to find ourselves in a context where mental contents were thought of as public, this reflective practice could suffer, especially where such mental data were held to be a more genuine, unvarnished account than one offered in spoken testimony. This builds upon the principle at stake in the 'guilty knowledge' example from above. A chilling effect on thinking itself could materialise owing to the possibility of very intimate surveillance via brain recording.
The mediation of thoughts, ideas, and deliberations into actions is part of autonomous agency and self-representation. The potential for indirectly representing such things in one's actions is part of what makes those actions one's own. Were a mind-reading device imagined as 'cutting through' this mediation to gain direct access to mental contents, this would not necessarily make for a more accurate representation of a person, nor underwrite a better explanation of their actions than one they might volunteer themselves. At the heart of this is the privacy of mental activity, and the space this allows us to deliberate. Nita Farahany has called this a 'right to cognitive liberty' (Farahany 2018).
The privacy of deliberation is very important in providing room for autonomy, and substance for agency. As has been mentioned, inner mental life can be characterised to a greater or lesser extent through behavioural cues. The difference between the reluctant carrying out of a task and an enthusiastic embracing of the same is often fairly obvious. But indirect assessment of someone's state of mind through their activities is a familiar, fallible, and well-established interpersonal practice. The idea that objective data might be used to characterise an attitude directly, once and for all, serves to undermine the role of agency. A decision to act represents a moderation of impulses, reasons, and desires. If a mind-reading device were deployed, it would certainly represent a claim on the real state of a person's mind. But this could serve to downplay the way in which that person's action is more complex than simply the outcome of a neural process.
Thinking of the cerebroscope example, this is akin to the decontextualisation of neural recordings discussed there. The signals recorded may make little sense outside a biographical story, and may thereby be likely to misrepresent the person recorded. The fact that extensive testimony played such a large part in the dream-decoding experiment seems to back up this thought-experimental conclusion.
More broadly, it is important to discuss the purposes to which mind-reading devices are put. Wearing a cast on a broken arm, for instance, displays some dimensions of a person's physiological state, but this is of low concern because no one stands to gain from 'stealing' such information. What is problematic is the potential for the misuse of people's thoughts, choices, or preferences as inferred from neurotechnology. Even if thoughts are not in fact accessible to a technology, ethical issues arise if they are taken to be. With the commercialisation of neurotechnology as 'mind-reading' technology, these potentialities multiply, as technology may be deployed where there is no particular need. This leaves open the question of what purposes the technology may be used for, and by whom. A potential diversity of technologies, actors, purposes, and stakes makes for a complex picture.
The socio-political ramifications of widespread neural recording could be deep. From these recordings, detailed predictions could be made about private, intimate aspects of a person, and for those with access to it, such data would be a valuable asset. Facebook's intended brain–computer interface, permitting seamless user interaction with its systems, would not only record and process brain signals but associate the data derived from them with detailed social media activity (Robertson 2019). This would represent a valuable resource, providing rich links between overt actions and hitherto hidden brain activity. This kind of detailed neuroprofiling will likely be taken to be as unvarnished and intimate an insight into a person as it is possible to acquire. To the extent that this is accurate, new dimensions of understanding people through their brains might be opened. As with the political micro-targeting scandals involving Facebook and Cambridge Analytica, such data could also enable personal manipulation, as well as social and political damage (Cadwalladr and Graham-Harrison 2018).
At the personal level, databases that associate not only behavioural but also brain data represent serious risks for privacy and for wider dimensions relating to dignity. The kinds of profiling they would enable would risk marginalising individuals and groups, while eroding solidarities among diverse groups. This happened in the run-up to Brexit, on the basis of covert psychometric profiling, and has caused lasting social damage (Collins et al. 2019; Del Vicario et al. 2017; Howard and Kollanyi 2016). Targeting information at specific individuals or groups on the basis of neural data would represent a new front in data-driven marketing or political campaigning, enabling novel, more sinister, and perhaps harder-to-deflect forms of manipulation (Ienca et al. 2018; Kellmeyer 2018).
These examples focus upon how information can be leveraged for specific effects. Where neuroprofiling converges with advancing technology, direct neural manipulation also arises as a potential concern. Among the types of neurotechnology already available for research and for consumer purposes are those that use brain data to control software and hardware, those that display data to the user as neurofeedback, and those that seek to modify brain activity itself. These neurostimulation or neuromodulation devices use data derived from the brain to modulate subsequent brain activity, regulating it towards some desired state (Steinert and Friedrich 2019). This is quite a clear challenge to autonomy. Outside an ethically regulated context such as a university research lab, it ought not to be taken lightly. Market forces are not self-evidently sufficient to ensure the responsible marketing, and use, of such potentially powerful devices.
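To make the structure of such devices explicit, the following schematic sketch (with invented function names, not any vendor's actual interface) shows the closed loop involved: brain-derived data are measured, compared with a target chosen by the device or its operator, and a stimulation command is issued to push subsequent activity towards that target.

```python
# Schematic closed-loop neuromodulation: a hypothetical sketch only, with
# invented stand-in functions for device reads and stimulation commands.
# Real systems differ; the point is the loop structure, in which the device
# itself works to steer brain activity towards a pre-set target state.
import random

def read_band_power() -> float:
    """Stand-in for a measurement of some brain-activity feature."""
    return random.uniform(0.0, 1.0)

def apply_stimulation(level: float) -> None:
    """Stand-in for a stimulation command sent back to the user's brain."""
    print(f"stimulation level set to {level:.2f}")

TARGET = 0.6          # the 'desired' brain state chosen by the device or its vendor
GAIN = 0.5            # how aggressively the loop corrects deviations
stim = 0.0

for step in range(5):
    measured = read_band_power()
    error = TARGET - measured
    stim = max(0.0, min(1.0, stim + GAIN * error))   # proportional correction, clamped
    apply_stimulation(stim)
```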
The kinds of concerns discussed here are not based in mind-reading per se, but rather in effects likely to arise in the context of widespread neurotechnology use. Beyond the market context, however, in the realm of ongoing research, at least one sort of mind-reading might appear to be technically possible, in a limited sense at least. Following analysis of this case, we will be well placed to take a position on the ethical concerns that have arisen across a variety of applications, from those where mind-reading is not the central effect to one in which it would be most likely.