
Beyond the responsibility gap. Discussion note on responsibility and liability in the use of brain-computer interfaces


    • Philosophisches Seminar, Forschungsstelle Neuroethik/Neurophilosophie, Johannes Gutenberg-Universität Mainz
Open Forum

DOI: 10.1007/s00146-011-0321-y

Cite this article as:
Grübler, G. AI & Soc (2011) 26: 377. doi:10.1007/s00146-011-0321-y


The article shows where the responsibility-gap argument regarding brain-computer interfaces acquires its plausibility from, and suggests why the argument is nevertheless not convincing. By way of explanation, a distinction between the descriptive third-person perspective and the interpretative first-person perspective is introduced. Several examples and metaphors are used to show that the ascription of agency and responsibility does not, even in simple cases, require that people be in causal control of every individual detail involved in an event. Taking up the current debate on liability in BCI use, the article provides and discusses some rules that should be followed when potentially harmful BCI-based devices are brought from the laboratory into everyday life.


Keywords: Brain-computer interface · Responsibility gap · Shared control · Liability · Neuroethics

1 Introduction

Some years ago, concern began to emerge regarding the responsibility gap that apparently arises with the use of many new technological strategies (e.g., Matthias 2004). Intelligent machines and robots, the assumption goes, might do things that cannot be said to be under the user’s control and, therefore, the user cannot be held responsible for possible damage caused by these devices.1 This argument has also been applied to the use of direct brain-computer interfaces (BCI) (e.g., Lucivero and Tamburrini 2008, 457).

BCIs use the electromagnetic potentials the brain produces (EEG/EMG-based BCIs) or the interaction of electromagnetic waves with physiological processes inside the brain (fMRI, NIRS) to control target devices. BCIs consist of four essential parts responsible for the acquisition of signals, the extraction of particular features, the translation thereof, and the device output (Wolpaw 2002; Mak and Wolpaw 2009). A target device or ‘actuator’ (e.g., a spelling device, prosthesis, wheelchair, robot, or gaming apparatus) is connected to the interface. At the moment, the use of BCI-based technology is still at the experimental stage. The research is mainly driven by the hope of making such technology viable in everyday life to assist impaired individuals; meanwhile, BCIs have in principle shown their potential as useful tools in therapy, rehabilitation, and assistive technology (Millán 2010; McCullagh et al. 2010; Mak and Wolpaw 2009; Daly and Wolpaw 2008; Birbaumer 2005, 2006; Wolpaw 2007). However, applications for average people also fall within their scope, for instance in the fields of entertainment, computer-assisted work, and even the military. Bearing in mind the technology’s broad potential, an early clarification of responsibility issues is highly relevant.
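The four-part structure named above (signal acquisition, feature extraction, feature translation, device output) can be pictured as a simple processing chain. The following Python sketch is purely illustrative: the stage names, the mean-amplitude feature, and the toy threshold classifier are my own assumptions for exposition, not any actual BCI implementation.

```python
# Illustrative sketch of the four essential BCI components described above.
# All names and the thresholding logic are hypothetical stand-ins.

def acquire_signal(raw_samples):
    """Signal acquisition: e.g., a window of EEG samples from scalp electrodes."""
    return list(raw_samples)

def extract_features(samples):
    """Feature extraction: here simply the mean amplitude of the window."""
    return {"mean_amplitude": sum(samples) / len(samples)}

def translate_features(features, threshold=0.5):
    """Feature translation: map the extracted feature onto a device command."""
    return "left" if features["mean_amplitude"] > threshold else "right"

def device_output(command):
    """Device output: the connected actuator (e.g., a wheelchair) executes the command."""
    return f"wheelchair turns {command}"

def bci_pipeline(raw_samples):
    """Chain the four stages together, as in the Wolpaw-style description."""
    return device_output(translate_features(extract_features(acquire_signal(raw_samples))))
```

Note that in this toy chain the user contributes only the raw signal; everything downstream is machine processing, which is exactly the locus of the ‘shared control’ worry discussed below.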

So, what is the background for bringing together the responsibility-gap argument and BCI technology? Three features matter. (1) BCIs are rather opaque interfaces, and their interaction routines are vulnerable to changes in their conditions over time and even to minor differences in the set-up of hardware components in everyday life (e.g., the EEG electrodes used in non-invasive procedures). (2) In some cases, the target device controlled by the BCI is able to learn and execute some minor activities by itself (e.g., adjusting a wheelchair’s course or preventing collisions). And (3) because the brain signal in different BCI applications is sometimes used to initiate a single action (via internally evoked potentials, e.g., motor imagery) and sometimes used to steer a process generated by the machine (via externally evoked potentials, e.g., P300), control is divided differently between human and machine. As a result of these three aspects, the user’s control over the device via the BCI is shared with a complex machine that is not transparent to the user. This fact of ‘shared control’ has led some to tentatively ask whether the use of a BCI device can be called ‘autonomous’ in a proper sense and whether the user can be deemed responsible for the ‘actions’ of the device as a whole (Lucivero and Tamburrini 2008, 457).

But recently several authors have denied that there is in fact a responsibility gap in BCI use and declared that potential trouble can be dealt with according to standard ways of reflection and extant legal regulations. When, e.g., Haselager et al. (2009, p 1352) simply “agree” with the respective statements in Clausen (2009) without further discussion or qualification, it seems to have become common sense today that BCI technology is not touched by the responsibility-gap problem. However, up to now none of the authors who have—and I think rightly—denied the relevance of the responsibility-gap argument for BCI technology has (1) explained the prima-facie plausibility of the responsibility-gap argument concerning BCI technology (an argument that still haunts the real BCI community) or (2) provided an ethical account of why that argument ultimately fails. Clausen (2009) alluded to the law and mentioned the usual routines of weighing risks against benefits in bioethics. Tamburrini’s (2009, 142–144) arguments are much broader here. He claimed that in the case of BCI use, there might indeed be situations in which nobody is morally responsible for a damaging event and that, therefore, the whole issue should be translated into terms of legal liability. The appropriate legal regulations could then be established according to traditional legal maxims and already existing rules. While fully accepting these consequences, my concern with this argument is that imposing legal liability on somebody unavoidably implies a judgment regarding moral responsibility. When saying that practice B can be (legally) regulated in accordance with practice A, one has to show that it is ‘right’ or ‘just’ to do so. And this requires a moral assessment of practice B. One has to make clear in which respects A and B can be called similar from a moral point of view before one adopts similar legal regulations for them. The point of having fixed rules of liability is not to replace moral assessment with rules, but to have transparent, time-saving routines, grounded in moral judgments, that are usable in complicated situations.

Therefore, the first aim of this article is to show which aspects of BCI use the responsibility-gap argument acquires its plausibility from and—on the other hand—that a lack in causal control does not hinder the ascription of (moral) responsibility. To reach this aim, I will distinguish between the descriptive third-person perspective and the rather evaluative first-person perspective in talking about BCI technology.

The second aim of this article is to start with the concretization of rules and regulations that might be used to deal with BCI technology in terms of (legal) liability. I will propose some main rules and discuss and specify them in line with possible difficulties and restrictions in their application.

2 The descriptive third-person perspective

People are responsible for their actions. Actions—to be called ‘actions’ in a proper philosophical sense—are carried out consciously and voluntarily, have to be initiated by the actor, and could in principle also be refrained from. Now, the responsibility-gap argument, applied to BCI technology, acquires its plausibility from a description of the signal acquisition procedures in BCIs from a third-person perspective. This would usually be the professional perspective of scientists and engineers. This description is then contrasted with the necessary characteristics of actions.

So, first let us follow the signal ‘production’ in BCI use from this technical point of view. The matrix (Table 1) shows the particular aspects. Comparing other technologies with BCI devices, one has on the one hand an interface (button, buzzer, keyboard, lever) that is controlled by voluntarily initiated movements the user is (at least potentially) conscious of. On the other hand, BCIs are among a certainly small number of interfaces that capitalize directly on unconscious processes of the physical body (tracking eye saccades would be another example). The signals used for BCIs are, as such, in principle and in any application not consciously accessible to the user of the BCI.2
Table 1

The descriptive matrix of signal acquisition in BCI

                    Internally evoked signals         Externally evoked signals
  Voluntary         Possible (e.g., motor imagery)    Not possible
  Not voluntary     –                                 Possible (e.g., P300)

The functions of some BCI devices can be activated voluntarily by spontaneously thinking particular thoughts, such as ‘moving my left foot’ (motor imagery).

In applications of this type, the electromagnetic signals used to control the target device can be seen as the unconscious correlates of the voluntarily ‘produced’ thoughts. This means that such internally evoked signals can be ‘made’ on the basis of the user’s own will. This is not possible in the case of externally evoked signals. In many BCI applications, e.g., P300 or error potentials, the ‘production’ of such signals is not voluntary in the sense that the user is not able to produce the relevant signal on the basis of his will. The signal just happens spontaneously as a reaction to external stimuli.3 In the latter case, the user of the device cannot refrain from ‘producing’ the signal. The whole state of interaction might then be characterized in the case of internally evoked signals as initiating and in the case of externally evoked signals as responding.

The consequence of adopting this perspective is that the use of BCI technology lacks—at least partially—important characteristics of actions. If the interface as such is totally hidden from the user’s awareness, one would indeed wonder whether moral responsibility is possible when using such technologies. In the light of this descriptive point of view focusing on signal acquisition, the responsibility-gap argument concerning BCIs has some plausibility.

3 The evaluative first-person perspective

The descriptive third-person perspective is, as science and technology have to be, focused on the causal chain of operating a BCI device. Seen from this perspective, the brain is more a component of technology (like a resistor or sensor) than an organ of a tool-using human being. And in the case that we are not (consciously and willingly) in causal control of a device, so the responsibility-gap argument presupposes, we cannot see the activities of the device as our actions and therefore we cannot apply the concept of responsibility. While in other technologies the competent user might follow his contributions until they reach the physical boundaries of the device, in BCI technology this is not possible. The interaction takes place in a consciously inaccessible realm between man and machine. Using conventional technology (car, drilling machine), it would be rather obvious who or what is to blame in the case of failure: perhaps, the user made a mistake; perhaps, the machine was out of order. In BCI technology, this obvious causal chain between user and device is blurred. And in the case of intelligent target devices, even another ‘agent’ comes into play as a source of causation different from the user.

The task is now to show that despite the above-described features of BCIs, responsibility ascription in BCI use is nevertheless possible and that there is, even in the case of very simple everyday actions, no link binding moral responsibility to direct causation. To start with, one might recognize that in everyday life, responsibility ascription is done on the basis of an interpretative first-person perspective. Only from this perspective does it make sense to talk about concepts like agency, intention, and (moral) responsibility. So, seen from the user’s point of view, the picture looks totally different.

Let us take a look at some examples far away from intelligent technology in order to illustrate this turn of perspectives. The following examples illustrate that somebody would easily be seen as responsible for an occurrence even if he is not alone, or not at all, in causal control of this occurrence. (a) Imagine somebody rowing on a river. The performance of the boat as a whole could be seen as an instance of shared control, shared between the rower and the stream. Seen from outside, it is not clear which of them is causally responsible for the actual course of the boat. So, one might measure the different influences and reconstruct the course of the boat on this basis. By doing that, we adopt the descriptive third-person perspective. But the rower himself would not ask for portions of causal control. He has the intention to reach a certain point, and through his contributions he controls the outcome of the performance as a whole, rather than portions of causal effectuation. (b) A second example stems from a scene in Shakespeare (cf. Davidson 1971): When Hamlet’s father is killed, what does the queen actually do? What is the action she performs? Does she murder the king? Does she pour poison into the king’s ear? Or does she just turn her hand around? No doubt all these statements are correct in a certain sense. But only the first is a comprehensive and sufficient description of the situation that we would accept in a real-life context. The last and the second-to-last statements speak from the descriptive third-person perspective only. And here we can find a gap in causation, too, because turning the hand around does not mean causally controlling the process of killing the king. Neither the flow of the poison nor its miraculous interaction with the king’s brain and the final halt of the king’s vital functions are, from that descriptive point of view, something that the queen ‘does.’ (c) A third and even more striking example is provided by actions that have no causal influence at all on an occurrence, where the person carrying out those actions would nevertheless be held responsible for the outcome. This is the case with omissions. A person sitting by a lake, seeing a child drowning nearby without intervening, would be morally blamed and legally sentenced, although nobody would, from the descriptive point of view, be able to show any causal relation between that person and the child’s death.

All these examples show that our regular moral judgments and ethical assessments are far from requiring causal control or conscious awareness of every single part and aspect of the processes leading to certain outcomes. From the descriptive third-person perspective, we actually never fully control anything. We are always only one small link in the causal chain, and to reach an aim we depend on the ‘cooperation’ of other parts of the world. We might learn about them and gain experience interacting with them, but we do not control them in the literal sense. Therefore, in everyday life, exhaustive causal control of a process is not a precondition for the ascription of responsibility for that process. It is always a particular or general aim of a certain performance that the agent focuses on. And a well-practised (competent) agent has, at least potentially, conscious control of that performance as a whole.

Up to now, all this was said without any particular reference to intelligent technology. Now let us come back and apply these insights to BCI use. Is not the peculiar way failure might occur in BCI use a reason for rethinking the above statements? In BCI use, it may happen that a device does not perform according to the intention of the skilled user, although the integrity of the machine is untouched and the user behaves as usual. The records from an airplane-style black box might show that an electromagnetic wave occurred that is usually interpreted as the order to, say, drive the wheelchair to the left. The user, however, might rightly report that he did not intend to go to the left and did not think the correlative thought. How should that possibility be assessed? The constellation has lost its similarity to the above examples. It now has rather the character of riding a horse.4 The animal, provided by a trainer (the ‘engineer’), usually does what the competent rider (the ‘user’) would like it to do, but sometimes the regular course of interaction fails and the rider, together with the horse, causes some damage. However, I do not think that we should assess this situation in principle differently from the former. The only thing we have to keep in mind is that the agent is interested in and responsible for the whole performance (e.g., using a BCI wheelchair to go to a friend’s premises). We would fall back on the descriptive point of view if we focused on single instances of causation. Given that a user is skilled and familiar with a device, we can reshape the problem in terms of reliability and responsible use. By doing so, the focus shifts from a single, detached occurrence to a practice of using. And being responsible for this practice means dealing with risks in a (morally) responsible manner.

4 Some rules for BCI use

Technologies must be at least ‘mostly reliable’ because otherwise no competent use could be achieved through training. A performance permanently close to the trial-and-error rate could not be called competent use, and such a device would never leave the lab. This might seem trivial, but it limits the problem remarkably: if BCI technology—because of immanent technical reasons or individual interactive constraints—could not be brought to a level of reliability that makes failure a rare exception, it would not be available for regular use. Given that the state of reliability is high enough, an assessment of risks could be done for every application type. In BCI-based photo browsers or spelling devices, failures will not cause any damage and can easily be undone. Here, the user’s frustration tolerance is the benchmark for the minimal reliability that is required. In the case of BCI-based prostheses or wheelchairs, the requirements are remarkably higher. Research statistics from the laboratory would provide information about how many operations a day regular use implies and how often failures occur. Taking into consideration the alternatives available to the user and the potential danger to him and to other people, a concrete figure of reliability, expressed as a percentage, could be defined for each application as a minimal requirement. For instance: a hand prosthesis or orthosis that drops a cup of coffee every week might not appear very reliable. Taking into consideration that the device enables the user to eat without help every day, this risk might nevertheless be tolerable. On the other hand, a wheelchair that crashes into passers-by once a week would be problematic even if it enables the user to go out without help every day.
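The kind of per-application reliability figure suggested above amounts to simple arithmetic on laboratory statistics. The following sketch makes this concrete with invented numbers (no actual BCI failure statistics are implied): it derives the expected failures per week from a per-operation reliability and checks it against an application-specific tolerance.

```python
# Hypothetical illustration of deriving a minimal reliability requirement.
# The figures (100 operations/day, 99.9% reliability) are invented examples.

def expected_failures_per_week(reliability, operations_per_day, days=7):
    """Expected number of failed operations per week, given a per-operation
    reliability expressed as a fraction (e.g., 0.999 for 99.9%)."""
    return (1.0 - reliability) * operations_per_day * days

def meets_requirement(reliability, operations_per_day, max_failures_per_week):
    """Check a device against an application-specific tolerance,
    in the spirit of rule 1.1 below (decent minimum reliability rates)."""
    return expected_failures_per_week(reliability, operations_per_day) <= max_failures_per_week

# A grasping prosthesis used 100 times a day at 99.9% per-operation
# reliability fails about 0.7 times a week: roughly the 'dropped cup of
# coffee every week' scenario, which might still be tolerable. At 99%,
# the same device would fail about 7 times a week.
```

The point of the sketch is only that the tolerance (`max_failures_per_week`) must be set per application type: a spelling device and a wheelchair with identical reliability percentages pose very different risks.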

So, the type and the probability of trouble is among the possible outcomes an agent is aware of when (competently) using a BCI device. Taking such risks has a genuine aspect of moral responsibility (prior to all legal liability), especially in all cases when others can be affected by one’s own decision.

Depending on the severity of the harm potentially caused by unintended performance of devices, the use of some BCI applications might require special measures. When thinking about such measures, we should try to avoid unrealistic scenarios. The reason that BCI technology today is rather a research issue or a matter of limited clinical use is that the technology is, in most of its applications, not yet reliable enough for everyday use. It is still possible that the envisaged applications under research will never reach a state of stability, robustness, and usability fit for everyday life. Therefore, we should not dramatize the situation but make rules for the case that BCI technology really does reach a level of appropriate reliability. We should also bear in mind that for most of its prospective users BCI technology would be a further, alternative option (an ‘additional channel’). The use of BCI communication devices by locked-in patients is of course a special case that needs additional care,5 but this scenario should not be taken as the standard model for BCI use in general. In the following section, I will name some of the main rules and then try to specify and concretize them by bringing in some limitations to their general application. This is done in order to create a starting point and to invite further debate.

In all cases where harm to people is possible,

1. The device has to reach a very high level of reliability.

1.1 Decent minimum reliability rates have to be defined.

1.2 Remaining risks have to be documented in terms of type and probability.


While many other devices are ready-made things controlled by consciously made input, a BCI device usually needs a calibration of the interface.6 So, the engineer might provide the physical device, but it is the personal form of interaction between a particular person and a particular machine that would in the end determine the reliability and, therefore, the concrete risk. Concerning the duties of the engineer, reliability rates would be rates that can be achieved with professionally calibrated devices and competent users. Therefore, having formal requirements for the physical parts of the technology alone would be incomplete in view of the special type of interaction in BCI use.

2. User training needs formal standards, and the user’s competence has to be documented by a final examination, resulting in a license to use the device in everyday life.


Unlike a driving license, this license would be valid for a specific device, not for BCI-based devices in general. Now we have formal requirements for the whole interaction between a human being and a machine. However, this interaction itself is vulnerable and can change or degrade over time. Therefore, we need a time factor in this rule.

2.1 This license has to be renewed on a regular basis.


Several users, such as lower-limb amputees or patients with a (low) spinal-cord injury, would probably be able to set up BCI devices by themselves. Others are in principle not able to do so but depend on their caregivers. In any case, the set-up procedure needs to fall under the umbrella of formal regulation.

2.2 A person who sets up a BCI device needs standardized training, his/her competences have to be examined, and a license is needed.


If users set up their devices by themselves, this would just be part of their general training, examination, and license.

We now have regulations concerning the physical device and the competences of users and carers. The subjects of responsibility, as well as of liability, would be engineers on the one hand and assistive-technology (AT) professionals on the other. What we still need is a procedure that informs the user about risks and the potential harm he/she might cause—even if he/she behaves competently and the machine is in order. The user needs to understand that this remaining risk is inherent in the special kind of interface a BCI is and that it is his/her responsibility to behave in such a way that sudden malfunction causes no or only minimal damage. A user who decides to use BCI technology has to be informed about the average frequency of unintended effects and has to agree to take those risks. And by taking the risk, he is responsible for the use of the device, because he knows about the possibility and scope of side effects.

3. The user has to be given formal instructions on the type and the probability of remaining risks and damages the device might cause in different situations. The prospective user has to consent to a protocol that documents this instruction and thereby take over responsibility for the appropriate use of the device and liability for potential damages.

4. Users need to have liability insurance.


It has to be clarified with the insurance companies whether regular liability insurance covers the use of BCI-based devices too. Otherwise users would need a special insurance policy, similar to that required for driving a car.7

With these four (main) rules and their specifications, we would have a frame to (a) ascribe responsibility to the different agents engaged in BCI use in line with our moral intuitions; and we would (b) charge liability on those agents in line with already existing regulations. In many other cases, when no or only minor harm can be expected from unintended effects of BCI use, it is just up to the user to weigh minor risks against benefits and to accept or to reject the technology. There seem to be no special moral problems involved in those ‘minor’ cases, and formal regulations are not needed.


Footnotes

1. Actually, the responsibility-gap problem is not new at all. Responsibility issues arose and were discussed intensively in the early nineteenth century in connection with a typical accident of the period: exploding steam engines (Bayertz 1995, esp. pp 24–29). The problem was that nobody had really caused these explosions. At first glance, neither the supplier, nor the workers, nor the owner of the factory could be blamed for having caused such accidents.


2. There is an opinion floating around that BCI signals cannot be said to be unconscious in the sense assumed here. Some researchers have suggested that patients trained in manipulating their slow cortical potentials (SCPs) learn to be aware of their brain states (Kotchoubey et al. 2002). The researchers showed that with the ability to control SCPs via a feedback device, the ability to assess the success of one’s own performance without feedback from the device increases as well. They inferred, by ruling out other explanations, that these patients perceive their SCPs directly. This would be to say that people can be directly conscious of signals a BCI might use. From my point of view, this explanation is not compelling. The ability to ‘produce’ something and to assess one’s own performance in doing so does not automatically imply conscious perception of the thing produced. Nothing rules out an interpretation of the experiments on which the subjects have conscious awareness of their performances rather than of the entities they manipulate. And this is more than claiming that they are aware of their alertness or concentration, as Kotchoubey et al. (2002, 109) suggested. Therefore, the signals used in BCIs, addressed from the descriptive third-person perspective, are legitimately taken to be and to stay unconscious.


3. Of course, the user focuses on these target stimuli voluntarily, and therefore ‘not voluntary’ does not mean involuntary in the sense of ‘against the will’.


4. Tamburrini (2009) proposed dogs or children as models. I choose horse riding because rider and horse form a unit, and it is the performance of this unit that is of special interest here.


5. Communication of legally relevant statements, for instance, would need procedures to double-check the correctness of the patient’s responses and to limit the danger of mistyping, which Tamburrini (2009, 142) is concerned about. One might think of the common practice of having to type a code or password twice when opening a new account on the internet.


6. This might hold for other assistive technologies as well. It is not claimed that BCI technology is absolutely unique here.


7. This could—at least for the nearer future—become a major problem. There is some evidence from eye-tracker-controlled wheelchairs: up to now, companies have evidently refused to provide such devices because of their awareness of the remaining risks and the potentially high damage that could be caused by malfunctions. One might infer that BCI devices of that sort would face similar hesitation from manufacturers and insurance companies as well. (Thanks to Michael Tangermann for providing me with this information.)



Acknowledgments This work is supported by the European ICT Programme Project FP7-224631. The paper reflects only the author’s views, and funding agencies are not liable for any use that may be made of the information contained herein.

Copyright information

© Springer-Verlag London Limited 2011