Connecting human minds to various technological devices and applications through brain-computer interfaces (BCIs) affords intriguingly novel ways for humans to engage and interact with the world. Not only do BCIs play an important role in restorative medicine, they are also increasingly used outside of medical or therapeutic contexts (e.g., gaming or mental state monitoring). A striking peculiarity of BCI technology is that the kind of actions it enables seems to differ from paradigmatic human actions: effects in the world are brought about by devices such as robotic arms, prostheses, or other machines, and their execution runs through a computer directed by brain signals. In contrast to usual forms of action, the sequence need not involve bodily or muscle movements at all. A motionless body, the epitome of inaction, might be acting. How do theories of action relate to such BCI-mediated forms of changing the world? We wish to explore this question through the lenses of three perspectives on agency: subjective experience of agency, philosophical action theory, and legal concepts of action. Our analysis pursues three aims: First, we shall discuss whether, and which, BCI-mediated events qualify as actions according to the main concepts of action in philosophy and law. Secondly, en passant, we wish to highlight the ten most interesting novelties or peculiarities of BCI-mediated movements. Thirdly, we seek to explore whether these novel forms of movement may have consequences for concepts of agency. More concretely, we think that convincing assessments of BCI-movements require more fine-grained accounts of agency and a distinction between various forms of control during movements. In addition, we show that the disembodied nature of BCI-mediated events causes trouble for the standard legal account of actions as bodily movements.
In an exchange with views from philosophy, we wish to propose that the law ought to reform its concept of action to include some, but not all, BCI-mediated events and sketch some of the wider implications this may have, especially for the venerable legal idea of the right to freedom of thought. In this regard, BCIs are an example of the way in which technological access to yet largely sealed-off domains of the person may necessitate adjusting normative boundaries between the personal and the social sphere.
A novel facet in the merger of humans with technological artifacts is brain–computer interfaces (BCIs), which connect the human mind to machines or applications. BCI technology has seen rapid developments in recent years and promises a wide range of practical uses in the near future. In essence, BCIs afford connecting brains to computers in order to affect the external world. BCIs record neuronal signals of users as inputs and process and transform them into outputs to command a device. Spectacular examples include the so-called brain orchestra, where BCI technology was used to perform a musical piece in front of a live audience (Palmer 2009), and the control of a drone quadcopter via BCI (LaFleur et al. 2013). BCI applications also allow for novel forms of control in computer gaming (Ahn et al. 2014) and of mobile devices such as tablets (Rojahn 2013). At the 2014 soccer World Cup in Brazil, a paralyzed man delivered the kickoff via a BCI-controlled exoskeleton. Facebook, Elon Musk, and other entrepreneurs have announced investments of over a hundred million dollars in developing BCIs (Constine 2017; Levy 2017). BCIs have a high potential for restorative and assistive use in medicine (Glannon 2014a; Glannon 2014b). Remarkably, they enable communication for and with paralyzed patients. In some tragic cases of locked-in patients, BCIs provide the only avenue of communication with the world (Mullin 2017). They can also enable patients with motor disabilities or amputees to control prosthetic devices, or to control external devices like a wheelchair. Further, BCIs can provide neuro-feedback for rehabilitative and neuropsychiatric treatment (e.g., Sreedharan et al. 2013; Lim et al. 2012). In short, BCIs seem set to become a groundbreaking neuro-technology that affords restoring and enhancing persons’ capacities for action and communication.
As these capacities are key characteristics of human life, crucial for self-determination and well-being, their restoration or enhancement via BCIs may turn out extraordinarily beneficial, especially for patients.
More abstractly, BCIs create novel ways for humans to interact with computers and, through them, with the broader physical and social environment. One of their peculiar features is that they enable changing the external world through thoughts, or rather neuro-psychological processes, without external bodily movements. Thereby, they transcend the need for bodies in motion to affect the world in an unparalleled way. A motionless body, the epitome of inaction, can now dramatically affect the world. Until now, every means by which humans affect the world has required moving the envelope of the body, if only slightly, as in speaking or blinking (the special case of non-actions, omissions, notwithstanding). BCIs afford humans, for the first time, to do things “through thoughts,” i.e., without the body as an intermediary.Footnote 1 In neurological terms: The outputs of the central nervous system (CNS) are either neuromuscular or hormonal. A BCI “provides the CNS with new output that is neither neuromuscular nor hormonal” (Wolpaw and Wolpaw 2012, 3). BCIs thus create a new output channel to interact with the world, bypassing the muscular system of the body. This, abstractly speaking, disembodied nature is the first peculiarity of BCI-mediated events we wish to emphasize. It raises a range of intriguing questions for the understanding of human action and confronts the philosophical inquiry into the nature of action with potentially novel forms. These theories of action, in turn, are interesting for those normative accounts of responsibility that are based on agency (Glannon 2014a; Tamburrini 2009; Haselager 2013). Considerations of responsibility, in turn, are of interest to lawmakers who will shortly have to regulate BCIs for market access, and to engineers who wish to design devices free from ethical or regulatory problems.
Despite this apparently novel way of initiating changes in the environment and a vast philosophical literature on action, a thorough analysis of BCIs in terms of agency is so far lacking. We wish to fill this gap. In the following, we shall explore novelties and peculiarities of BCI-mediated events and emphasize ten of them.Footnote 2 Along the way, we are particularly interested in the question whether, and under which conditions, BCI-mediated events qualify as actions. As there is no single concept of action, we will approach this question from three perspectives:
We begin with a subjective “user” perspective, grounded in experience, which asks whether persons experience a sense of agency in BCI-mediated events so that they ascribe them to themselves as their own actions. This is followed by an engagement with philosophical action theory, mainly the so-called standard theory and some more recent refinements. This, to anticipate, will show that some BCI-mediated events are not relevantly different from paradigmatic human actions, so that they should be considered as actions. Others, by contrast, do not qualify as actions at all. Some instances are hard to categorize. They should prompt action theories to refine their accounts. In addition, we argue that legal systems are well advised to overcome “willed bodily movement” accounts and recognize mental actions.
A Typology of BCI Technologies
To begin, let us briefly introduce the technology and draw some relevant distinctions. BCI systems measure brain activity. Electrical brain activity is recorded by electrodes on the scalp, on the cortical surface, or directly in the cortical tissue. Subsequently, signals are amplified and digitalized. Pertinent signal characteristics are extracted, computationally processed, and translated into commands that can control applications or external devices. In most cases, the recorded data is used to control devices like prostheses, wheelchairs, or computer software like a cursor or spelling apps (Mak and Wolpaw 2009; Lebedev and Nicolelis 2006). In many designs, these external devices provide some form of feedback, enabling BCI users to modify their brain activity to reach the desired aims and performances. To highlight the implications and importance of BCIs for agency, it is helpful to distinguish three categories of BCIs: active, reactive, and passive BCIs (Zander et al. 2010).Footnote 3
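The pipeline just described (record, amplify and digitize, extract signal characteristics, translate into commands) can be sketched schematically. The following Python snippet is a purely illustrative toy model: the feature (per-channel power), the channel-to-command mapping, and the simulated signal are all our own assumptions, not part of any real BCI system.

```python
import numpy as np

def extract_feature(raw_signal: np.ndarray) -> np.ndarray:
    """Toy feature extraction: mean squared amplitude (power) per channel."""
    return np.mean(raw_signal ** 2, axis=1)

def translate_to_command(features: np.ndarray) -> str:
    """Toy translation step: map the most active channel to a
    predefined device command (mapping is an illustrative assumption)."""
    commands = ["move_left", "move_right", "stop"]
    return commands[int(np.argmax(features))]

# Simulated "recording": 3 channels x 100 samples; channel 1 is given a
# much larger amplitude, standing in for a task-related activation.
rng = np.random.default_rng(0)
raw = rng.normal(scale=[[1.0], [3.0], [1.0]], size=(3, 100))
command = translate_to_command(extract_feature(raw))
print(command)  # channel 1 dominates, so "move_right" is selected
```

Real systems interpose amplification, filtering, and trained classifiers between recording and translation; the sketch only shows the overall shape of the signal-to-command chain.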
In active BCIs, the user intentionally performs a mental task that produces a certain pattern of brain activity, which the BCI system detects for processing. A commonly deployed mental strategy in active BCIs is motor imagery. The user imagines moving parts of her body, without actually performing the movement. The imagination of the movement of different body parts corresponds to different activations of the primary somatosensory and motor cortical areas. For example, imagining right-hand movement activates a different cortical area and results in different activation patterns than imagining left-hand movement or movement of the foot. These different activation patterns are picked up by the BCI and translated into signals to control an application (for more on motor imagery see Graimann et al. 2009, pp. 11–13). For example, via active BCIs participants were able to command a robotic arm orthosis to assist drinking (Looned et al. 2014), to control a robotic gait orthosis (Do et al. 2013), to drive a wheelchair (Galan et al. 2008), and to control a spelling application (Perdikis et al. 2014). One of the main problems is usability: control of the device is still often cumbersome, effortful, and susceptible to errors.
Instead of decoding motor commands from the motor cortex, Aflalo et al. (2015) decoded the neural signals of intentions of imagined movements as a whole (e.g., “I want to pick up this bottle of water”). With the help of two chips implanted in the brain of a tetraplegic human, the researchers were able to directly read out the intentions to do specific things from the posterior parietal cortex, where the intent to make a movement is formed before the intentions are transmitted to the motor cortex. The read-out of the intention was then translated into the movement of a robotic arm or a cursor. The team could also determine the speed with which the subject wanted to move. This approach makes it easier for people to control devices as compared to concentrating on the specific components of a movement. In the paper, the authors also speculate about the possibility of directly decoding non-motor intentions (e.g., desire to switch on a device) in order to control the environment.
In a reactive BCI, brain activity is modulated in reaction to an external stimulus given by the BCI system (Hohne et al. 2011). A commonly used paradigm is P300-based selection, where stimuli, such as letters or symbols, flash in succession on a screen. The user has to direct her attention to the symbol that she wants to select. The BCI system detects the so-called P300 signal that sets in about 300 ms after the attended stimulus is presented. Based on the P300, the system determines which symbol the user wants to select. A wide variety of controls are possible using selective attention, ranging from writing emails to controlling a TV (see Sellers et al. 2010). One example of a P300-controlled system is the Brain Painting application that enables locked-in patients to create works of art (Holz et al. 2015). Another example is a tactile BCI: Kaufmann et al. (2014) showed that selective attention to tactile stimulation can be utilized to drive a wheelchair. We wish to note that the distinction between reactive and active, common in the literature, is misleading insofar as the term “reactive” suggests a passive user. As we will elaborate in more detail below, the user is quite active, for example by directing or sustaining attention.
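The logic of P300-based selection can be illustrated with a minimal sketch: epochs recorded after each symbol's flashes are averaged, and the symbol whose average response in the post-stimulus window is largest is selected. The window indices, amplitudes, and data below are illustrative assumptions, not parameters of any actual speller.

```python
import numpy as np

def select_symbol(epochs_by_symbol: dict, window: slice) -> str:
    """Pick the symbol with the largest mean response in the P300 window.
    epochs_by_symbol maps symbol -> array of shape (n_flashes, n_samples)."""
    scores = {
        symbol: epochs.mean(axis=0)[window].mean()
        for symbol, epochs in epochs_by_symbol.items()
    }
    return max(scores, key=scores.get)

# Simulated EEG: 10 flashes per symbol, 200 samples per epoch. Only the
# attended symbol "B" carries an added positive deflection (a toy P300)
# in the assumed window, samples 60-90.
rng = np.random.default_rng(1)
def make_epochs(attended: bool) -> np.ndarray:
    epochs = rng.normal(size=(10, 200))
    if attended:
        epochs[:, 60:90] += 2.0  # toy P300 deflection
    return epochs

epochs = {"A": make_epochs(False), "B": make_epochs(True), "C": make_epochs(False)}
selected = select_symbol(epochs, slice(60, 90))
print(selected)
```

Averaging across repeated flashes is what makes the weak P300 detectable against background activity; real systems additionally use trained classifiers rather than a fixed window.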
Passive BCIs simply monitor the brain activity of the user, without requiring her to perform any mental task. The brain activity that passive systems monitor is not modulated intentionally to achieve a certain goal (later, we shall remark on what distinguishes effects via passive BCIs from effects via active or reactive BCIs). Among the user’s mental states that can be monitored are workload and arousal, so passive BCIs may help to avoid dangerous situations in the workplace by detecting lapses in attention (Martel et al. 2014). Sometimes, the goal in deploying a passive BCI is to improve human-computer interaction (Zander and Kothe 2011). Among the latest developments in the field of passive BCIs are affective BCIs that detect affective states and use this information to improve the interaction of humans and computers. For example, when the system detects that the user is frustrated or bored, it might adapt the computer game or task to be performed, by decreasing the level of difficulty or by introducing more engaging elements (Mühl et al. 2014, p. 68).
All BCI systems produce an output that is the input to the application or device that is controlled. The output can command many different things. Depending on the setup of the application, the BCI can provide more or less detailed input to the “whether, when, and how” of the event. This is particularly important for our purposes as it relates to the kind and amount of control users have over the event. We follow Wolpaw and Wolpaw (2012, 9) here in distinguishing two kinds of output: goal-selection and process-control. In goal-selection, the output of the BCI directs an application or device to a goal, e.g., it commands the robotic arm to “get the cup on the table” or a wheelchair to “drive to the bathroom.” The more fine-grained steps in the execution are carried out by the application. In process-control, users have more control over the execution because they have to give several commands, for instance, “right, right, right” to turn a wheelchair to the right by several degrees. In well-working setups, users would control movements of devices simply by thinking, e.g., about the direction (a drone could be controlled just by imagining flying right or left). At the moment, such fine-grained control over movement execution seems possible only in simple applications such as gaming. Control of prostheses, for instance, is often still cumbersome because it requires giving constant commands, especially if several parameters are controlled (multidimensional control, e.g., direction, speed, instant braking, and other parameters of a wheelchair). Given the still high failure rate of BCI signal detection, process-control can also become frustrating. Process-control places high demands on users and the BCI device. Goal-selection is much easier for users. Wolpaw and Wolpaw draw the apt analogy that goal-selection is similar to a GPS-guided autopilot (Wolpaw and Wolpaw 2012, 10). Process-control would then be flying in manual mode.
Goal-selection places heavier computational demands on the controlled application and will often only work in clearly defined environments with few options. Of course, BCIs and applications can also mix these two modes.
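The division of labor between the two output modes can be made concrete with a small sketch. In the toy grid-world below (entirely our own illustration), goal-selection means the user issues one high-level goal and the device plans the step sequence itself, whereas process-control means the user herself supplies every low-level step command.

```python
def device_plan(start, goal):
    """Goal-selection: the device computes the step sequence on its own,
    given a single high-level goal such as 'go to the bathroom'."""
    steps = []
    x, y = start
    while (x, y) != goal:
        if x < goal[0]:   x, steps = x + 1, steps + ["right"]
        elif x > goal[0]: x, steps = x - 1, steps + ["left"]
        elif y < goal[1]: y, steps = y + 1, steps + ["forward"]
        else:             y, steps = y - 1, steps + ["back"]
    return steps

def execute(start, commands):
    """Process-control: every low-level command comes from the user."""
    moves = {"right": (1, 0), "left": (-1, 0), "forward": (0, 1), "back": (0, -1)}
    x, y = start
    for c in commands:
        dx, dy = moves[c]
        x, y = x + dx, y + dy
    return (x, y)

# Goal-selection: one user command suffices; the device plans the rest.
plan = device_plan((0, 0), (2, 1))
print(plan)  # the device's own step sequence

# Process-control: the user would have to issue each of these commands
# herself, one by one, to reach the same position.
print(execute((0, 0), plan))
```

The sketch also shows why goal-selection shifts the computational burden onto the application: `device_plan` only works because the environment is fully specified, which matches the observation that goal-selection tends to work only in clearly defined environments.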
One current trend in development is automatization. Through machine learning, devices may learn more about the intentions and goals of the user, with the aim of predicting future behavior with fair probability and allowing a high degree of automatization. Finally, we wish to note that so far, we have mainly considered one-directional systems in which the BCI picks up brain activity and affects events in the external world. However, there are also bidirectional, so-called closed-loop systems in which the controlled application affects the brain, e.g., by electric stimulation. For instance, a device for psychiatric treatment of depression could monitor mood levels and adjust them by stimulating the reward center. Although closed-loop systems raise intriguing questions for autonomy, we shall not address their peculiarities here.Footnote 4
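The automatization trend can be illustrated in its simplest possible form: a device that keeps statistics over the user's past goal selections and proposes the most frequent one as the likely next goal. This is a deliberately minimal stand-in for the machine-learning approaches mentioned above; the class and the example goals are our own assumptions.

```python
from collections import Counter

class GoalPredictor:
    """Toy predictor: counts past goal selections and proposes the
    most frequent goal as the user's likely next choice."""
    def __init__(self):
        self.counts = Counter()

    def observe(self, goal: str):
        self.counts[goal] += 1

    def predict(self) -> str:
        goal, _ = self.counts.most_common(1)[0]
        return goal

predictor = GoalPredictor()
for g in ["drink", "tv", "drink", "drink", "bathroom"]:
    predictor.observe(g)
print(predictor.predict())  # most frequent past goal: "drink"
```

Even this crude frequency count shows where the philosophical pressure comes from: the more the device anticipates the user's goals, the less each individual outcome is initiated by a distinct user command.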
Let us now turn to our first central question: Which BCI-mediated events qualify as actions, and which do not? Answers are not only interesting for our understanding of human action, but also for normative issues such as ascribing responsibility. So, what is an action? Unfortunately, no unified account of action exists and many disciplines entertain their own concepts. Thus, we approach the question from three perspectives. We begin with a closer look at the experience of users: Do they consider or experience BCI-mediated events as their own actions? Afterwards, we explore BCI-mediated events in light of the standard theory of action in philosophy. Subsequently, we discuss how BCI-mediated events fare in light of the concept of action in law. Along the way, we will point out in which respects BCI-mediated events are peculiar and interestingly different from familiar paradigmatic forms of human actions.
Subjective Dimension – Sense of Agency
The first relevant perspective is the so-called first-person perspective in which a person experiences some of her movements as her actions. It is said that there is a specific sense of agency, which is either a pre-reflective or a reflective feeling of the subject that she is causing an action (Gallagher 2012, p. 21). This sense of agency seems to be a multifactorial affair, with various cues serving as contributory factors (Limerick et al. 2014; Synofzik et al. 2008; Synofzik and Vosgerau 2012; Moore and Fletcher 2012; Wegner 2002). Shaun Gallagher (2012), for example, claims that feedback loops are relevant, where efferent signals, afferent feedback (perceptual feedback about body movement), and perceptually based intentional feedback (perceptual sense about the effect of the action in the world) are integrated.
A range of studies from social psychology to neuroscience has shown that the experience of agency is based on psychological mechanisms that are prone to errors. The interplay of diverse factors contributing to the sense of agency can lead to either “doing without feeling” or “feeling without doing” (Haselager 2013; Wegner et al. 2004). Predicted and actual sensory feedback, for instance, can mismatch and result in a faulty sense of agency (Sato and Yasuda 2005). For example, persons may feel that they are not in control of a bodily movement because they erroneously attribute it to an external source.
Although errors are rare in paradigmatic actions, it is quite possible that they occur more frequently under the specific conditions of BCI use. Spectacularly, a design in which a BCI system was controlled via motor imagery was able to induce a so-called body ownership transfer, i.e., participants reported a sense of body ownership for a human-like robot (Alimardani et al. 2016).
In another experiment, participants developed a sense of control over a virtual hand although there was no actual causal connection between the BCI and the hand (Bashford et al. 2016). In an experiment with a nonfunctional BCI, which pretended to control the gestures of a virtual hand on a display, it was shown that the mean vicarious control ratings of participants were high (Vlek et al. 2014).
Potential differences in the sense of agency in BCI use compared to paradigmatic actions could result from differences in the single factors contributing to the sense of agency. Feedback seems to be the most promising candidate to cause mismatches in the sense of agency. For most BCI systems there is some kind of perceptual feedback, especially visual feedback.Footnote 5 People see the object they are moving. Still, there are crucial differences in the feedback processes in BCIs compared to paradigmatic bodily actions. Latency (i.e., time between the input and the subsequent perceived outcome) can differ, which may result in postponed feedback (Limerick et al. 2014). In an experiment, Evans et al. (2015) introduced a systematic delay between the brain signals and the visual consequences (a so-called neuro-visual delay) in a sensory-motor imagery task, which resulted in the decrease of the sense of agency. Due to potential differences in feedback, it is possible that users of BCI experience a different or diminished sense of agency as compared to performers of paradigmatic bodily actions. Whether this is actually the case, however, awaits further empirical investigation.
Another factor that could cause differences in the sense of agency in BCI use, as compared to other human-computer interactions (HCI), is the input modality (Limerick et al. 2014). In ordinary HCIs, the system is controlled either through some external device, e.g., a mouse or a keyboard, or through commands via speech or eye tracking, for example. In contrast, for all types of BCIs the input modality is brain activity itself. Results from HCI research suggest that the sense of agency is increased if the input method is more reliable, i.e., if intended and actual outcome match and consequently some predictability is achieved. Further, research on human-computer interfaces has also yielded some indication that the interface design influences the sense of agency of the users (Moore 2016).
Whether or not BCI users experience some BCI-mediated events as their actions or fall prey to misattributions is an empirical question. But there are indications that warrant the speculation that BCI-mediation, at least in some designs, may lead to misattributions in the sense of agency. Users may act, without experiencing the event as their action, or vice versa. This is the second peculiarity of BCI-mediated events. In this regard, we wish to emphasize the potential for fruitful collaborations between sense of agency researchers and HCI-/BCI research. Theoretical and empirical research regarding the sense of agency can provide valuable resources for delineating crucial differences between paradigmatic bodily actions, HCI and BCI-mediated events. Such collaboration may result in BCI designs that increase the sense of agency in BCI users.
Philosophical Action Theory
From the subjective dimension of agency, let us now turn to how philosophical action theory deals with BCI-mediated events. To appreciate the peculiarities of BCI-mediated events, let us introduce two common distinctions from the philosophical literature:
the distinction between basic and non-basic acts and
the distinction between mental action and bodily actions.
Mental Actions, Basic Actions and Non-basic Actions
First, we shall elaborate on the distinction between mental action and bodily action. Thinking about actions usually conjures up images of people intentionally moving their bodies (or parts of them) in space. However, many of the things that humans do intentionally do not involve peripheral bodily movements. For example, we intentionally deliberate about something, try to remember something, or calculate “in our head.” Hence, it seems sensible to distinguish between overt bodily actions and mental actions. Mental actions are actions that do not involve any external movement of the body or limbs (Mele 1997). In other words, in mental actions “[…] there is no motor output to be controlled and no sensory input vector that could be manipulated by bodily movement” (Metzinger 2017, 1), and mental actions bring about a mental event, a mental disposition, or a mental state (Proust 2001).Footnote 6 Despite the differences, there seem to be many similarities between mental and bodily actions. For example, both types of action can be purposeful, goal-directed, inhibited intentionally, or controlled by the agent to some degree. Further, both can include a sense of agency, and both can require motivation, learning, and prior experience (Metzinger 2017; Wu 2013; Mele 1997). However, some authors distinguish different forms of mental agency and argue that not all forms of mental action are similar to bodily actions. For Tillmann Vierkant (2018) and Pamela Hieronymi (2009), so-called managerial control is the only kind of mental action that is similar to bodily actions. Managerial control is an “intentional mental action with the purpose of creating an environment that will facilitate cognitive activity and the management of attitudes” (Vierkant 2018, 6). Another form of mental agency, cognitive agency, differs from bodily agency because cognitive acts, like judgments, are not intentional.
To shed some light on what mental agency means in the BCI-context, consider the distinction between two kinds of mental agency provided by Thomas Metzinger. There is attentional agency, which is the “ability to control one’s focus of attention”, and there is cognitive agency, which is “the ability to control goal/task-related deliberate thought” (Metzinger 2013, 2). These two kinds of mental agency mirror nicely what is going on in most active and reactive BCIs (see Section 2 above). Remember, the subject has to either direct her attention (reactive BCI) or she has to perform some motor imagery (active BCI), both of which are mental actions in the sense just outlined. Contrast this with passive BCIs where brain activity is monitored but no mental action is required. It is worth pointing out here that the two kinds of mental agency that Metzinger talks about are both intentional and therefore similar to bodily agency. But as we have just seen with reference to Vierkant and Hieronymi, one form of mental agency, namely cognitive agency that includes cognitive acts like judgments, is not intentional. Now, provided that a future BCI would be able to respond to non-intentional mental acts, we would be confronted with a case where a non-intentional mental act directly causes some movement, which has implications for issues of moral agency.Footnote 7 So far as we are aware, contemporary BCIs do not yet involve this non-intentional mental agency.
Let us now turn to the second distinction: What does it mean to say that an action is basic, while another action is non-basic? Sometimes we do things by means of doing other things. For example, we drive a car by shifting gears and controlling the steering wheel. Based on observations like these, some philosophers draw a distinction between basic and non-basic acts (e.g., Goldman 1970, ch.3.4; Danto 1973). A basic act is an act that you do not do by doing something else.Footnote 8 The paradigmatic example is raising an arm: We simply raise it, without any further steps in between. Consequently, provided the distinction between basic and non-basic acts is sensible, if an event is an action, then it is either a basic action or it is brought about by the performance of a basic action.
Combining this distinction with the one between mental and overt actions yields a particularity of BCI actions. The BCI-mediated event in active and reactive BCIs is a non-basic action (assuming that it is an action at all) because the subject always performs it by doing something else. That is, in order to bring about an external event or movement, the subject has to perform a mental action first (which is regularly a basic act in the previously mentioned sense), which in turn means that the BCI-mediated event in question is a non-basic action. Thus, an action with identical effects—say, an arm reaching for a glass—is a basic action under natural circumstances (that is in non-BCI contexts), but a non-basic action when it is facilitated with a BCI-prosthesis. This is the third peculiarity of BCI actions we wish to highlight.
Standard Theory of Action
But are such BCI-mediated events really actions proper? Many authors simply assume without further questioning that they are actions (e.g., Requarth 2015; Kotchetkov et al. 2010). But this is not as evident as it may appear and leads into intricate debates in action theory. In this section, we will show that according to the popular causal theory of action (e.g., Davidson 1963), some BCI-mediated events qualify as actions,Footnote 9 but not all of them.
What does the so-called standard theory of action say? The standard theory of action is a causal theory,Footnote 10 according to which an event is an action if it is caused, in the right way, by certain mental states and events.Footnote 11 Further, the mental states that cause the action must have a content in virtue of which they rationalize the action.Footnote 12 It is commonly assumed that intentions, beliefs, and desires can play such a role. Further, most authors hold that at least intentions play a crucial causal role in action and that intentions are not reducible to beliefs and desires or a combination thereof (e.g., Bratman 1987).
What are intentions? As one would expect, there is some disagreement on what intentions are exactly. Whatever else is true about intentions, it is plausible to assume that intentions are commitments to action plans (Clarke 2010; Mele 1992). The content of an intention, i.e., the action plan, may be simple or detailed. That is, a person may intend simply to buy milk in the store, or she may intend to buy a particular brand by taking a particular route to a specific store.Footnote 13 Further, we can be aware of our intentions when we act, but many actions lack conscious intentions (see Lumer 2017). It is not a requirement of the standard theory of action that we are constantly conscious of our intentions.
Further distinctions regarding intentions have been proposed. Pacherie (2007), developing John Searle’s ideas on intentions, differentiates between present-directed intentions (P-Intentions) and future-directed intentions (F-Intentions). F-Intentions are formed before the onset of the action and are closely linked to deliberation. In contrast to F-Intentions, a P-Intention implements the action plan (inherited from the F-Intention) in a specific situation and entails a more precise representation of the goal. In addition, Pacherie introduces another form of intention that she calls motor-intention (M-Intention). At the level of M-Intentions, perceptual-action contents are transformed into sensory-motor representations. The idea is that M-Intentions monitor the bodily movement in a fine-grained way.
Now, consider BCI use in the case of the Brain Painting application that we described above. It is sensible to assume here that F-Intention and P-Intention play the same role as in paradigmatic actions. The subject may deliberate about producing a painting, and she knows how to achieve this goal. That is, she has an action plan that is then implemented via P-Intentions that involve the mental action of directing attention in order to pick the specific painting tools. The M-Intention, on the other hand, is missing in this case. A peculiarity of BCI technology is that motor-intentions do not feature prominently: in most cases, there simply is no motor-intention because there is no motor representation that guides an intentional bodily movement.
Let us now come back to BCI-mediated events and how the standard theory of action handles them. Consider a scenario of an active or reactive BCI. The subject has to perform a mental action (e.g., motor imagery or selective attention), which is a basic action (as established above), in order to perform the non-basic action of selecting an option from a menu. The causal theory of action holds that this is an action if it is caused in the right way by the right sort of mental states. It is uncontroversial to assume that the subject in our scenario has an intention, in that she is committed to an action plan that includes performing a mental action to achieve a certain goal. The subject also knows that performing the mental action will lead to the desired goal, i.e., selection of a certain option (if the BCI works properly, that is). Further, applying Pacherie’s types of intentions here, it is sensible to assume that F-Intention and P-Intention play the same role in BCI-mediated events as in paradigmatic bodily actions. The F-Intention may be described as something like “Selecting a desired option with the help of the BCI”, while the P-Intention might be described as something like “I want to select the red colored circle so I have to focus my attention on it” (in reactive BCIs) or “I want to select the red colored circle so I imagine moving my thumb” (in active BCIs). So, the occurrence in question (e.g., the selection of an option) is caused and rationalized by the right kind of mental state, i.e., an intention, and is therefore an action proper. In these cases, BCI-mediated events are similar to paradigmatic actions in their causal ancestry, and therefore qualify as actions.
By contrast, according to the standard theory, events mediated by passive BCIs do not constitute actions, because the events in question are not caused by the right mental states. This is the fourth peculiarity we wish to highlight: events mediated by passive BCIs are no actions.
Action and Control
However, some authors consider causal accounts of agency of the kind just presented too coarse-grained. A key objection to the standard view is that it places too much emphasis on the causal history of the event in question and overlooks the capacities of the person in the moment of executing a movement (e.g., Frankfurt 1978). A long debate about deviant or wayward causal chains revolves around this issue. To cut a controversial story short, it seems that sometimes, persons do not act—even though their movements are caused by the right antecedent states such as beliefs, desires, and intentions—because they lack control over the movement. This is relevant for the present purposes because BCI-mediated events seem to differ from ordinary actions with respect to control over the execution of the movement. Control is surely a multifaceted concept, especially when applied to the mental realm, and too broad to fully unpack here. However, Pacherie (2007) provides a useful distinction between guidance and executive control, and Shepherd (2015) similarly distinguishes between the executive and the implementational aspects of an action. The first concerns such things as planning and plan revision, whereas the second concerns the establishment of a state of affairs that satisfies a goal state. In line with these distinctions, we suggest distinguishing executory, guidance, and veto control, and shall apply these notions to BCI use.
Executory Control and Libet Experiments
People have many desires, beliefs, and intentions on which they do not act. Something additional has to come in to realize such intentions: an executory command. Often, it is called a volition, but because that term is controversial and ambiguous, we rather speak of an executory command.Footnote 14 From a commonsensical perspective, the idea seems plausible: sometimes, it seems to us that we give a conscious command, a go-signal, that initiates an action and causally executes it. Think of a camerawoman who carefully observes a landscape and adjusts her equipment until she thinks “This is the perfect moment” and starts the recording. There seems to be a clear executory command that follows a conscious appreciation of the moment (even though the decision was influenced by external stimuli). However, we often move our bodies without consciously giving any such command. Most of the time, we just act. In fact, there seems to be an entire subset of actions—automatisms—that we perform in the absence of such executory commands (we will turn to automatisms below). A surprising peculiarity of reactive and active BCIs is that users have to give a go-command via the respective mental action. The device picks up the underlying signal and translates it into movements. In such applications, users give a go-command in a much more pronounced manner—namely, via a conscious and effortful mental action—than we usually do in ordinary actions. This is the fifth peculiarity of BCI actions worth highlighting.
Interestingly, this difference might bear relevance for the discussion about the implications of experiments by Libet (Libet 1985, 1999) or Wegner (2002). Without rehearsing them here, one of the worries they stirred is that the subjective experience of movement initiation, the executory command, might not “cause the action,” as it is neither causally efficacious nor necessary for movement execution. Instead of the experience of “consciously willing to initiate a movement,” antecedent unconscious processes detectable in the motor cortex (“readiness potential”) initiate the movement. The experience of initiating the movement then turns out to be misleading (the role of the conscious experience is controversial: perhaps it is epiphenomenal; perhaps it comes with a veto-power). Interestingly, and irrespective of one’s position on Libet’s experiments, the initiation of movements via BCIs would not fall prey to such worries, because the respective unconscious preparatory processes in the motor cortex (and related brain areas) cannot initiate the movement and remain causally irrelevant, since the output via the neuromuscular system is not used. The BCI issues commands to applications only if it detects the signals correlated with the respective mental act. This requires a conscious and effortful performance of a mental act. BCI-movements thus preserve the causal efficacy of a conscious command (such as mental imagery). In this regard, BCI actions may come closer than ordinary actions to the ideal of consciously initiated action.Footnote 15
Executory control over movement initiation is just one form of control. A famous case example in action theory can be explained by its absence:
Bob desires and intends to shoot the sheriff, but this makes him nervous and causes his finger to cramp, which in turn causes the trigger to be pulled, resulting in the gun being fired and the sheriff being shot.
Did Bob shoot the sheriff? Although the shooting was caused by the correct mental states, it seems that it was not caused in the right way. Instead of the executory command, there was an unintentional control-undermining element at play here—nervousness—that initiated the movement.Footnote 16 Therefore, the shooting does not qualify as an action according to the standard theory of action. Such cases illustrate that executory control is an important element of actions (whether it is necessary is a question we will address in a moment).
Whereas agents with active BCIs have executory control in the way just mentioned, other BCI designs can initiate events without executory commands. In the standard protocols, one option is that the BCI initiates events in an application without a command by the user (Wolpaw and Wolpaw 2012). One line of current research focuses on BCIs that read out brain states to predict users’ movement intentions (Wang and Jung 2011). Through machine learning, BCIs may adapt to plans or intentions of the users, bypassing the cumbersome process of giving commands via, for example, mental imagery. Engineers hope that this will lead to a much smoother user experience. Indeed, it might be a requirement for the practical success of restorative BCIs. However, if BCI systems initiate movements without any user command, or by reading out users’ preferences or more distal intentions, users do not have executory control. This leads to the sixth peculiarity of BCIs: they may initiate movements by predictive interpretation of brain signals. This has the somewhat paradoxical effect that engineering BCI systems that work best for users, especially patients, may succeed precisely by undermining features such as initiation that are central to concepts of action.
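The predictive initiation just described can be made concrete with a toy snippet. This is purely illustrative and not based on any real BCI software; the function name, the threshold, and the numbers are invented stand-ins for a trained machine-learning decoder.

```python
# Toy sketch of predictive initiation (illustrative only; the "decoder"
# is a stand-in, not a real machine-learning model).

def predict_intention(signal, threshold=0.8):
    # Stand-in for a trained decoder: here, simply the mean signal value.
    confidence = sum(signal) / len(signal)
    return confidence >= threshold

# The movement is triggered by the decoder's prediction alone;
# the user issues no go-command, hence exerts no executory control.
movement_initiated = predict_intention([0.7, 0.9, 0.85])
```

The philosophically relevant point is visible in the last line: initiation depends solely on the decoder's reading of the signal, not on any command given by the user.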
There are further forms of control. Sometimes, after initiation, people have control over the ensuing execution of movements, that is, the ability to alter and influence the execution of movements, such as the trajectory of a limb movement. Let us call this kind of control guidance control. Guidance control is often limited. For instance, we are not aware of the many muscles and their contractions that are necessary to raise an arm, let alone to perform more sophisticated actions such as skiing. Overall, humans lack fine-grained muscular control. Nonetheless, through various feedback channels, we monitor the progression of our movements and are able to adjust them; that is, we can bring them under conscious control, although only at a general level.
In most BCI applications, however, users do not possess guidance control to a relevantly similar degree. The signals detected by the BCI are processed and transformed by the machine in operations that are inaccessible to the user. Because the BCI is like a black box, users do not have guidance control over those parts of the causal chain that are processed and executed by the machine. Users may only influence the unfolding of events controlled by the machine by sending another signal, e.g., redirecting the process, if the technical design so allows. Much depends on the precise properties of the technical design. Here, the distinction between goal-selection and process-control becomes relevant. In goal-selection (“autopilot”), users lack guidance control; in process-control, they possess guidance control insofar as they can direct the events controlled by the application. This might be more or less fine-grained, depending on the technical setup. In the future, more sophisticated BCIs might be able to provide fine-grained process-control along with multisensory feedback, so that they are more integrated with the minds of users. The amount of guidance control might then resemble the ordinary guidance control of healthy adults. So, we wish to note the seventh peculiarity: in BCI designs with goal-selection, users lack any guidance control; in process-control designs, they may possess some.
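The contrast between goal-selection and process-control can also be put in a minimal sketch. All names and steps are hypothetical; the only point is that in one design the machine fills in every intermediate step, while in the other each step is user-directed.

```python
# Toy contrast between goal-selection ("autopilot") and process-control
# designs. Names and steps are invented for illustration only.

def goal_selection(goal):
    # The user picks only the goal; the machine plans and executes every
    # intermediate step, so the user has no guidance control.
    return ["machine step toward " + goal for _ in range(3)]

def process_control(user_commands):
    # Each intermediate step is directed by a user command, so the user
    # retains (possibly coarse-grained) guidance control throughout.
    return ["execute " + cmd for cmd in user_commands]
```

On this sketch, veto control would amount to a further operation that truncates either sequence mid-way.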
The last form of control we wish to distinguish is veto control, the power to stop the further execution of a movement. This is evidently an important form of control. Users of reactive BCIs possess it insofar as they can simply discontinue sustaining attention, which will stop the movement. The same might be possible through mental actions in active BCIs. In addition, further technical options to stop the operation of the BCI should be implemented. In the best case, users have the power to halt BCI-mediated operations at any time through some other output channel. Thus, we call upon BCI engineers to design sufficient ways and options to provide stop-signals.Footnote 17
We have introduced three forms of control: executory, guidance, and veto control. The question is which of them is necessary or sufficient for a movement to qualify as an action. We presuppose, in accordance with arguments in the literature outlined above, that some form of control is required. But which? We suggest either executory or guidance control. If a person exerts at least one of them, the movement may constitute an action (other conditions notwithstanding). What about veto control? This question is important because veto control is the only form of control many BCI designs will leave users: future BCI designs may operate by reading out P-Intentions or other mental states to predict what users want to do and then initiate the action. Users then have neither executory nor guidance control. Are such BCI-mediated events actions? Whether veto control suffices in the absence of executory or guidance control is an interesting question that brings us to the gray areas of the concept of action. Is a movement that a person neither initiates nor controls during execution an action, if she could stop it?
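The three forms of control can be summarized in a toy model. This is a deliberately simplistic sketch of our conceptual distinction, not of any actual BCI architecture, and all names are our own inventions.

```python
# Toy model of the three control forms distinguished above.
# Deliberately simplistic; not a model of any real BCI system.

class MovementLoop:
    def __init__(self):
        self.trajectory = []
        self.running = False

    def start(self, go_command):
        # Executory control: execution begins only on an explicit go-command.
        if go_command:
            self.running = True

    def step(self, adjustment):
        # Guidance control: the user may adjust the unfolding trajectory.
        if self.running:
            self.trajectory.append(adjustment)

    def veto(self):
        # Veto control: the user can halt further execution at any time.
        self.running = False

loop = MovementLoop()
loop.start(go_command=True)   # executory control
loop.step(adjustment=0.5)     # guidance control
loop.veto()                   # veto control
loop.step(adjustment=1.0)     # ignored: execution has been vetoed
```

The predictive designs discussed above would, in these terms, call `start` on the user's behalf, leaving her at most the `veto` operation.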
This question has parallels to another gray area in the investigation of agency—automatism. Automatisms are movements that are not initiated by a conscious executory command, nor do persons exert conscious guidance over them. Classic examples include movements in sports or while driving a car. Driving a car requires constant movements such as shifting gears, braking, and accelerating. Most of these movements are executed without any conscious control; they often even occur outside the driver’s awareness. They are more or less initiated and guided by unconscious mechanisms. Given the prevalence of automaticity in daily life (Bargh and Chartrand 1999), the question arises whether such automatic behaviors are actions or not.Footnote 18 For causal theorists, the answer depends on whether the behavior is non-deviantly caused by a corresponding intention (see Lumer 2017). The behavior does not require the formation of P-Intentions according to Pacherie’s taxonomy mentioned above. Suppose a person wishes to drive his car from Paris to Rome, gets in the car, and indeed drives there, performing a myriad of automatized movements. It would be strange to deny that the drive is an action. Similarly, many of our daily automatic movements may not be traceable to intentions. Think about the many things we do with our hands all day, e.g., during a conversation. These movements are neither consciously initiated nor consciously executed. But they could, at any time, be brought under conscious control by attending to them. They are potentially controllable. Is this sufficient? Are such subconsciously initiated and guided movements actions, in virtue of being potentially consciously controllable and stoppable? We wish to suggest that potential control is not enough. These events are thus not actions (they might be omissions). In any case, we can note that such movements are located at the fuzzy borders of the concept of agency.
BCI-mediated events will often be set in this gray area between actions, omissions, and mere movements. In most current BCI applications, there is little or no guidance and veto control. The question whether veto control is sufficient to qualify an event as an action might become more relevant when passive BCIs become more common and are equipped with an active veto control option.
So far, we have addressed BCI-mediated events from two vantage points: Sense of agency (i.e., the subjective dimension) and philosophical action theory. We shall now focus on BCI-mediated events vis-à-vis the concept of action in law, which allows highlighting their disembodied nature.
Disembodiment & Legal Concepts of Action
One of the most fascinating and intriguing features of BCI-mediated events is their disembodied nature (Nicolelis 2011; Himmelreich 2016). This contrasts nicely with a legal perspective on actions. Whenever one speaks of the law in a generic way, it should be borne in mind that there is not one unified system of law but diverging legal systems, so the following might not apply to every jurisdiction to the same extent. But at least two well-established legal systems from the civil- and common-law traditions converge on the aspects relevant here. Generally, the main mode of operation of the law is to prescribe or proscribe actions. Both the US Model Penal Code and the German Criminal Code stipulate an act requirement, i.e., offenses require an act (or the omission of an act). Both systems also converge on similar notions of what amounts to an act. The Model Penal Code defines an “act or action” as “a bodily movement” (§ 1.13(2)). Subsequently, it clarifies that, among others, reflexes, convulsions, movements during unconsciousness or sleep, or bodily movements that are not the product of effort or determination of the actor are involuntary. Put positively, the law requires a voluntary – in the sense of willed – bodily movement (in Moore’s (2010) reading: a bodily movement caused by a volition). In a century-long debate, German legal scholarship has come up with a similar and concededly somewhat pale concept of action as a ‘bodily movement controlled by the will’.Footnote 19 Despite ambiguities regarding concepts such as ‘being caused’ and ‘controlled’, the essence of the concept of action is the same in both systems (Fletcher 2007). Importantly, this concept implies, conversely, that mental actions are not actions proper.
These legal concepts of action, relevant for ascriptions of responsibility, stand in contrast to changes in the world brought about through BCIs. BCIs transcend the human body because brain signals can causally affect a variety of external devices. BCIs enable persons to affect the external world while their bodies remain motionless. In fact, whether their bodies move in space is causally irrelevant for bringing about the desired BCI-mediated effect. As far as we can conceive, BCIs afford, for the first time, a way to actively (or causally) bring about changes in the world without moving one’s body (all communicative acts require at least controlled minimal movements of the bodily envelope, such as the blink of an eye).
If the Model Penal Code is applied verbatim (and the nulla poena sine lege, “no penalty without a law,” principle obliges criminal courts to do so), effects caused in the external world via BCIs would not count as actions—even if carried out intentionally. Such an exclusion of liability for BCI-mediated changes in the world appears prima facie unreasonable: it would be unjust to refrain, without substantive reasons, from holding BCI users to the norms and from punishing them for misdeeds, and it would also be unfair with respect to those (potentially) harmed by such movements to bar them from seeking redress.Footnote 20 Even more: if the law were to adopt such a no-liability position, it would create a strong reason for banning the use of BCIs, as it would violate the basic idea that adding new risks to the world is justifiable only if someone takes responsibility in case of their realization. The narrow legal view therefore seems prima facie unwarranted. BCI-mediated events thus provide an illustrative scenario that suggests reforming the ‘willed bodily movement’ doctrine of the law.
But before legal statutes are amended, one may try to re-interpret the act requirement in light of these new technological developments. Especially the BCIs currently developed in restorative medicine, such as arm or limb prostheses, resemble and mimic functions of natural human body parts. Users may even conceive of them as parts of their body. We wish to call them quasi-bodily devices. With some abstraction, one might conceive of their movements as bodily movements and thus, at least legally, understand the act requirement to include quasi-bodily devices. Note, however, how this departs from ordinary language ascriptions. It implies, for instance, claiming that a locked-in patient, who cannot move her body but nevertheless steers a prosthesis via BCI, has moved her body in a legal sense. Other applications, by contrast, afford controlling entities that are not directly connected to the body, such as the Brain Painting computer software, or machines or devices that are physically located elsewhere. Considering the movement of a machine, which is physically located in a different place than the biological body of the user, as the bodily movement of the agent seems to overstretch possible meanings of “bodily movement,” especially in light of criminal law’s demands of strictness and definiteness. Therefore, BCI-mediated events through devices that are not quasi-bodily are not actions in the legal sense. Notably, this form of action differs from what one may call acting at a distance through artifacts. One may speak of a pilot who kills persons in Afghanistan through a drone, which he directs via a joystick, even though he sits in front of a computer in the USA. The mere fact that actions can have effects in places far away from the location of the movement does not call into question the nature of the movement as an action. The difference in the BCI case is that there is no movement independent of the device in the first place.
If devices were considered parts of the body, rather than external props to it, the concept of the body would be transformed into something different, something hard to distinguish from artifacts or machines. Perhaps prostheses, BCIs, and other technologies of disembodiment prompt such a postmodern reconceptualization of the body and its boundaries, and perhaps we will, at some point, conceive of bodies as extending in space (and time) over and above their “natural” limits (even into virtual worlds). But presently, and for the purposes of ascribing (legal) responsibility, “bodily movement” is a local phenomenon that requires physical movement of the body and cannot be understood to encompass the movements of devices clearly physically separated from it. Capturing movements of such (non-quasi-bodily) devices with legal concepts of action then necessitates reforming the legal concept. Thus, this is the eighth peculiarity: BCI-mediated events that do not involve quasi-bodily devices are not actions in the current legal sense (but might be so according to a reformed concept of action in the law).
Bodily Movements and Tools
But before we explore a reform of the legal concept a bit further, let us bring the legal perspective into dialog with philosophical perspectives on disembodied actions. The philosophical debate regarding actions begins with bodily movements, and traditionally the key question of the debate is what differentiates actions from mere bodily movements. For instance, Davidson, the founding father of the causal theory of action, identified actions with bodily movements (Stout 2006, 143 ff.; Ford 2016). Accordingly, most authors take for granted that bodily movement is the subject of inquiry. Ford (2016) aptly calls the view that identifies actions with bodily movements corporealism. According to corporealism, actions never temporally and spatially extend beyond the movements of the body (see also Stout 2006, 144). However, despite Davidson’s insistence that actions are nothing more than movements of the body, it needs to be stressed that there is no need for proponents of the causal theory of action to require that actions be bodily movements. According to the causal theory, an occurrence (or event) is an action when it is appropriately caused by the right events. That is, the features that make the event an action are its antecedent causes.Footnote 21 So, the lack of any overt bodily movement in BCI-mediated events does not automatically disqualify them from constituting an action.
One of the alternatives to the standard theory is of particular interest with respect to embodiment and action. According to Hornsby’s (1980) “trying theory of action,” the movement of the body is not an essential feature of actions. What is essential, however, is the trying. On the most basic level, according to Hornsby, an action is the event of trying to do something, which may, or may not, then result in bodily movement. Every action is an act of trying. A corollary of her view is that not all, but only some, acts of trying are acts of trying to move one’s body. In light of the trying theory, there is nothing special about BCI-mediated events, and they can constitute actions.
So, there are no inherent reasons to disqualify BCI-mediated events as actions. A reason for the reluctance to conceive of BCI-mediated events as actions might be found in the fact that the use of instruments has not played a prominent role in classic philosophical action theory. Under the sway of corporealism, it concentrated on the bodily movements that direct a tool, or, as in the case of Hornsby, actions were identified with inner occurrences of trying. In both instances, tools and instruments are in a blind spot. This neglect of instruments may well be a shortcoming (see Ford 2016), especially in cases where people make use of tools without bodily movements—by doing things through thoughts. Depending on the level of abstraction, the narrow focus on bodily movements might seem misleading. Tools, and the materials upon which our movements exert effects, are often essential parts of the action description. When it comes to acting with or upon things that are external to the body, a description of our bodily movement that is stripped of any reference to the external is not primary but is derived from a description of interacting with something other than oneself (Ford 2016, 10f.).
However, it would likewise be superficial to conceive of BCIs as simply another tool. This would neglect their peculiar role in the action sequence: the bypassing of the muscular system and the body. When persons use tools to act through them on, e.g., other objects, a three-part relation emerges: the bodily movement directs a tool that affects another object. The bodily movements through which the tool is used can regularly be separated, at least for analytic purposes, from the movements of the tool, even when persons have become so accustomed to their interaction with a tool that they experience it as a continuation of their body. Because of the lack of bodily movement, there is no equivalent relation in BCI use. Ordinarily, controlling tools requires bodily movements; controlling them through BCIs does not. So, it is not the tool use that is novel, but the way the tool is used. And though one may say that BCIs are also tools, regarding them as nothing but tools fails to account for their peculiar disembodied nature. Some, such as Miguel Nicolelis, a pioneer of BCI development, consider this to be the “liberation of the human brain from the human body” that may unleash a set of developments currently impossible because of biological constraints (Nicolelis 2011, 315). But whether the disembodied nature is relevant, and in which sense, is another question that should not blur the descriptive analysis. For our present purpose of informing the law, we are only interested in whether disembodied BCI actions should (or should not) count as actions proper. As we have argued in this section, causal theorists and trying theorists alike have no reason to deny that BCI-mediated events can constitute actions. Philosophical views across the spectrum thus speak in favor of including BCI-mediated events among actions. At least, they do not provide principled objections against such a view.
Implications for the Law and Freedom of Thought
The interdisciplinary input from philosophy seems helpful for the law to grasp the nature of novel technologies and regulate them. It clears the way for the law to broaden its concept of action. However, we should note that the aims of the law as a normative enterprise and those of action theory are not identical. Some legal reasons speak against expanding the concept of action, and we have to address them briefly. For one, changing the law is not as easy as it may appear; changing a general concept such as that of “action,” in particular, affects a multitude of diverse (and unforeseeable) scenarios. Furthermore, the ratio legis, the rationale underlying the concept of action, merits attention. BCI movements in fact reveal a quite fundamental assumption of the legal system: the distinction between governmental or societal regulation of outward behavior and internal mental states. The mental realm, from active thinking to passively occurring emotions or other psychological processes, is left out of the domain of legitimate governmental regulation. Good reasons speak in favor of restricting legal regulation—and personal responsibility—to outward actions, as opposed to mental acts. The key motivation for this is the right to freedom of thought. Its idea is that governments should have no say over the minds of citizens. This leads to a self-restraint of the law, already recognized in a maxim of Roman law: cogitationis poenam nemo patitur (no one shall be punished for thoughts). As a consequence, the law does not prescribe or proscribe mental actions.Footnote 22 Evidently, changing the scope of legal regulation on such a general point may have repercussions in other fields of the law and on the relation between the state and its citizens. We suppose that it is especially because of the danger of overbearing state powers that calls for jettisoning the act requirement and expanding the scope of criminal law have not found much resonance in legal scholarship.
Let us look at this more concretely.
The novel form of action that active and reactive BCIs afford may be best characterized as a hybrid of mental and bodily action that does not involve the muscular system. A mental action triggers a causal chain, which in turn brings about (intended or merely foreseen) external effects. Accordingly, to capture BCI-mediated events, the law has to discard the bodily movement requirement and complement and expand its concept of action, presumably along the following lines: acts include “mental acts that produce an effect in the external world.” Accompanied by the appropriate mens rea, e.g., the intention to bring about a prohibited state of affairs, performing such a mental act could constitute a punishable offense. However, such an enlargement might cause problems for the law in very different cases. For one, the moment of action, relevant for several legal issues such as attempts, might now shift in cases in which some time passes between the mental act and the effect in the world. Moreover, expanding the class of actions may turn events that were outside the scope of the law into legally relevant actions. Here is an example: suppose a person entertains sexual thoughts about another person, who is, for some reason, aware of this. Suppose the latter feels very uncomfortable because of this. If these effects are foreseeable for the thinker, thinking such sexual thoughts would then qualify as a “mental action with effects in the external world” and possibly fulfill the elements of an offense. One may appreciate or reject such an outcome, but it is clearly an expansion of the scope of the law that has to be analyzed and debated thoroughly. Additional legal domains could be affected (Duff 2004), such as bodily movements that are not under full control during performance but were triggered by a mental act (automatisms).
Given the occasionally hairsplitting distinctions of the law, an expansion could also impact different topics such as the legal view on collective actions (Himmelreich 2016).Footnote 23
But there is another important change that this expansion implies: BCI users give up the protection of freedom of thought for those mental acts which can produce external effects via a foreseeable (and further to be defined) causal chain.Footnote 24 This is our ninth (and normative) peculiarity of BCI actions. To put it more provocatively: one may say that the law would, for the first time, prohibit thinking particular thoughts or performing particular mental actions (those that bring about further external effects). As long as the law was confined to bodily movements (and omissions), people could think what they wanted. This change would be a remarkable novelty. As persons are quite unaccustomed to being held responsible for mental actions, BCI users have to be informed about these potential liabilities. Furthermore, this legal liability has limits. In analogy to bodily movements, we suggest that persons must have sufficient control over bringing about the respective event. The distinction to which we earlier alluded between mental occurrences and mental actions then becomes legally relevant. The first category comprises mental events that happen to a person, rather passively, such as when thoughts or images suddenly pop up. These events may be detected by BCIs and set further events in motion, but the person did not perform a mental act. This is the case, for example, for passive BCIs that react to uncontrollable psychological processes that do not constitute actions.Footnote 25 Mental actions, such as motor imagery, by contrast, comprise acts over which persons exert some amount of control. This distinction between controlled and non-controlled mental processes, which separates mental actions from non-actions, is surely interesting from several perspectives and may well need further refinement for normative purposes.
For instance, the idea of an executory command (or volitional act) to think particular thoughts seems different from the executory command to move one’s limbs. Moreover, the (required) amount of control has to be rendered more precise. For instance, as research on thought suppression indicates (e.g., Wenzlaff and Wegner 2000), a yet hypothetical legal duty not to think particular thoughts may be quite hard for people to comply with. This might be a price worth paying, but it at least merits discussion between legal scholars, philosophers, BCI engineers, affected persons, and policy makers. At the very least, it requires some skill in meta-cognition. And this is the tenth and last peculiarity: regular BCI users will (have to) develop novel skills to watch over their own minds.
The discussion between different perspectives on agency also illustrates underlying differences between the approaches of action theory and the law. The law is ultimately interested in ascribing and distributing responsibility for events in the world. The idea of action thus plays a functional role in a fair distribution of liability; action therefore requires control in the moment of acting. Philosophical action theory, by contrast, seeks the defining features of natural kinds. The former tends to analyze potential novel forms of action in light of responsibility ascriptions; the latter tends to address ontological issues. The subjective perspective is influenced by sensory mechanisms and may thus diverge from the other two. Consequently, there is no unified anatomy of an action. BCIs and the merging of minds with machines may thus provoke different answers from normative and metaphysically oriented approaches. The analysis of agency in BCI use may provide the foundation for further legal and ethical discussions about issues such as autonomy, privacy, responsibility, and legitimate human enhancement.
In conclusion, let us recap our results. The theoretically most important aspect of BCIs is that they provide a novel output channel that bypasses the muscular system of the body. This grounds the disembodied nature of BCI-mediated events (first peculiarity). For discussions of agency, we suggest distinguishing active, reactive, and passive BCIs. But are movements mediated by these devices actions? Users, from their subjective perspective, may not attribute movements as actions to themselves if they do not experience a feeling or sense of agency, which in turn might be absent because an executory command or feedback is lacking. Although more data is needed, it is likely that users may misattribute BCI-mediated events and fail to consider themselves agents although they did in fact initiate or control the unfolding of the events, or vice versa (second peculiarity).
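The tripartite distinction between active, reactive, and passive BCIs can be made concrete in a minimal sketch. The class and function names below are purely illustrative (not drawn from any BCI software library), and the rule that reactive designs still involve an act on the user’s side is one reading of the discussion above, not an established classification:

```python
from enum import Enum

class BCIDesign(Enum):
    """Tripartite BCI taxonomy discussed in the text (illustrative labels)."""
    ACTIVE = "active"      # user issues a deliberate mental command (e.g., motor imagery)
    REACTIVE = "reactive"  # system decodes brain responses to presented stimuli (e.g., spellers)
    PASSIVE = "passive"    # system monitors spontaneous mental states; no user command at all

def user_initiated(design: BCIDesign) -> bool:
    """One possible rule: active designs, and derivatively reactive ones
    (attending to a stimulus is arguably an act), involve something the user
    does; passive designs merely detect occurrences that happen to the user."""
    return design in (BCIDesign.ACTIVE, BCIDesign.REACTIVE)
```

On this sketch, only passive designs fall outside the scope of user-initiated events, which matches the text’s claim that passive BCIs cause events even in the absence of intentions.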
Through the lens of philosophical action theory, BCI-mediated events fare differently. One peculiarity is that BCI-mediated events are non-basic actions (third peculiarity). But are they all actions? Some are actions according to the standard causal theory because they possess the right antecedents. When it comes to passive BCIs, however, there are no actions, insofar as passive BCIs cause events even in the absence of intentions (fourth peculiarity). Furthermore, action theories premised on control have to draw finer distinctions between forms of control, for which we suggest conscious executory control, conscious guidance control, and veto control. A BCI system may confer one but not the others. In some designs, BCIs require a much more pronounced executory control command (fifth peculiarity). In others, the BCI may conversely initiate events without any input from users (sixth peculiarity). Furthermore, users usually have no guidance control (beyond goal selection) or only restricted guidance control, depending on the technical design (seventh peculiarity). While possessing either executory or guidance control seems to suffice for action, the challenging case is the absence of both, with only potential veto control remaining. This is a general characteristic of automatic behavior. To assess such movements, many action theories would have to provide more detailed criteria. We suggest that movements that are neither initiated consciously nor guided during the execution phase do not qualify as actions. As a side note, the lack of executory control, often lamented as a blow to responsibility in debates over the Libet experiments, could be overcome by active BCI design, which would preclude unconscious movement initiation.
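The criterion just suggested — that an event counts as an action only if it was consciously initiated or consciously guided, veto control alone not sufficing — can be stated as a simple decision rule. The following sketch is only an illustration of that criterion; the field names are ours, and whether a given BCI design actually affords each form of control is an empirical question the sketch does not answer:

```python
from dataclasses import dataclass

@dataclass
class ControlProfile:
    """Which of the three forms of control a given BCI design affords the user
    (illustrative representation of the distinctions drawn in the text)."""
    executory: bool  # conscious initiation of the movement
    guidance: bool   # conscious steering during the execution phase
    veto: bool       # ability to abort the unfolding event before completion

def qualifies_as_action(profile: ControlProfile) -> bool:
    """The text's suggestion: either conscious initiation or conscious guidance
    suffices; mere (potential) veto control does not make an event an action."""
    return profile.executory or profile.guidance
```

For example, a design affording only veto control yields `qualifies_as_action == False`, which is the challenging case of automatic behavior discussed above.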
A striking peculiarity of BCI-mediated events is their disembodied nature. People can affect the world without moving, an unparalleled form of interaction between humans and the world. This disembodied nature challenges legal concepts of agency that draw upon “willed bodily movement.” According to them, BCI-mediated events are not actions in the legal sense (eighth peculiarity). To capture them, the law would have to develop a more inclusive concept of agency, capable of grasping mental actions with external effects. This, however, would create some tension with the idea of freedom of thought. Users of BCIs would have to accept limitations on their freedom to think what they want, insofar as this thinking enables them to affect the rights of others (ninth peculiarity). These novel duties to monitor one’s own mental states, and the functioning of BCIs in general, may lead users to develop meta-cognition (tenth peculiarity). BCI technologies that provide access to the mind in a novel way thus prompt the reconsideration of venerable legal doctrines. This is no surprise: ultimately, the purview of the law ends where the social sphere ends. If the social sphere expands into the mind through neurotechnologies, normative boundaries have to be redrawn as well.
Doing things with thoughts implies neither that no other physical intermediary, such as a prosthesis, is necessary, nor that thoughts play no causal role in ordinary bodily actions. We wish to thank an anonymous reviewer of the journal for this and other valuable suggestions.
Henceforth, we shall use the term “BCI-mediated events” because it is more inclusive than alternative terms such as “movement”: events include changes of the BCI system as well as overt bodily movements that are brought about via BCI technology. Further, the term “movement” may encourage the idea that the scope of the paper is limited to bodily movements. This is not the case. For reasons that will become apparent in later sections, we explicitly want to include events that do not involve bodily movements.
We are aware that some view this tripartite distinction with skepticism, because the terms “active,” “reactive,” and “passive” are “subjective” and “lack clear neuroscientific definitions” (Wolpaw and Wolpaw 2012, 6). While this may be true, most concepts of agency have some subjective residue, be it experience, mental states such as reasons, or processes such as mental actions, and might therefore not be fully translatable into neurological terms alone.
For some moral and legal implications of closed-loop systems see Kellmeyer et al. 2016.
There has been some philosophical debate regarding mental actions, especially concerning the question what mental events should be included in this category. Some authors, like Galen Strawson (2003) for example, hold that most of our mental activities are merely occurrences that happen to us. Strawson restricts mental actions to the prefatory stage that creates the conditions for thoughts to occur. Examples of these stage-setting mental actions are calming a wandering mind or stopping a train of thought. Here is Strawson on mental action: “Mental action in thinking is restricted to the fostering of conditions hospitable to contents coming to mind. The coming to mind itself—the actual occurrence of thoughts, conscious or non-conscious—is not a matter of action” (Strawson 2003, 234).
We want to extend our gratitude to an anonymous reviewer for making helpful suggestions regarding this distinction.
Please note that not everyone agrees that there must be basic acts, see Lavin (2013).
Note that because of its popularity, the causal theory of action is sometimes called the standard theory of action, and we will use both notions synonymously here. Also, we cannot rehearse the many intricacies of the debate over the standard theory here, but will shortly point to some aspects with respect to control.
To be precise, the so-called standard theory is an event-causal theory of action. There are other causal theories of action that are not event-causal. The agent-causal approach, for example, holds that agents qua substances are the undetermined cause of an action (e.g., see O’Connor 2002). However, the agent-causal approach is widely rejected because it relies on a dubious notion of substance causation.
The phrase “the right way” refers to non-deviant causation but we will come back to this later.
Schlosser (2011) for example argues that the causal theory consists of two parts. First, there is the causal account of the nature of action that holds that an event is an action when it is caused in the right way by the right kind of mental states. Second, there is the causal theory of reason explanation that claims that the mental states of the agent that causally explain the action also rationalize it. Rationalization here means to provide reasons for an action, not as commonly used in psychology, as post hoc rationalizations.
Here is Mele (1992) on intentions: “Intentions are executive states whose primary function is to bring the world into conformity with intention-embedded plans” (162). And further: “The representational element [of an intention] incorporates what we may call an action plan, or simply a plan. Plans need not be complex, detailed, or explicitly entertained” (109).
One of the most influential objectors to the idea of volitions as executive acts is Gilbert Ryle (1949).
One might read the Libet experiments to entail a more comprehensive claim, namely that all decisions, not only executory commands, are pre-determined by unconscious processes. BCIs could not overcome this more general worry. Whether such far-ranging claims are warranted by the experiments of Libet or his successors is of course quite controversial. Until recently, the studies seem to have measured signals from the motor areas of the brain only. This may indicate preparation of motor execution, but not of other, more downstream processes such as deciding to act. As far as we see, only some recent studies have found evidence for a signal preceding decisions and mental actions, e.g. Soon et al. (2013). Our present argument is restricted to the original problem pertaining to unconscious preparation of movement execution before the person forms a volition to execute them. Such preparation cannot become effective in active or reactive BCI-designs.
Here is Schlosser’s (2010, 299) more formal rendering of such a basic deviance: “There is an agent’s mental state or event, R, that rationalizes the performance of an action of type A. R causes an intermediary state, N, which causes a movement of type M. This movement would constitute or realize an instance of A-ing, were it not caused by N. Typically, N is a state that appears to undermine the agent’s control over the movement, such as a state of severe nervousness, and so it seems clear that the agent does not perform an action at all. But according to a causal theory, the agent performs an A-ing, as M is both caused and rationalized by R.”
In a recent paper Clausen et al. (2017) propose that every semi-autonomous system should feature the possibility of a veto control.
Please note that an action can be both automatic and controlled at the same time. However, an action cannot be automatic and agentively controlled with respect to the same feature at the same time (see Wu 2013).
We also wish to add that one may draw a finer distinction between criminal and civil liability, as each could be grounded in a different concept of action. But this is a contingent matter that can only be addressed with respect to a particular jurisdiction. Furthermore, depending on the circumstances of the case, BCI users might be held responsible for omissions (provided they could have performed a different act that would have stopped the BCI-mediated event from occurring), or for acting negligently by using the BCI system, if the occurrence of the BCI-mediated event was foreseeable at the moment of action. But these subtle differences in forms of criminal responsibility do not undermine our main point: BCI-mediated actions are not bodily actions in the sense of criminal law.
For our purposes here, it is not necessary to address the case of omissions, which of course lack an external change of the world.
Although the inner and the outer have never been delineated precisely, mental actions such as imagination are prime examples of the sphere of the mind, which should be left unregulated.
Alternatively, the legal concept of action could be construed as “willed physical movements.” This would capture BCI-mediated events. However, it might cause different problems. For one, it would not include negligent actions. Suppose a user knows that mental action Phi causes a specific outcome. She Phis, not because she wants to produce that outcome, but merely because she is negligent. This would not count as a “willed physical movement,” but it falls under the definition of action suggested by the authors. Furthermore, “willed physical movement” might also be too broad and unable to draw limits. For instance, the CEO of a multinational company sleeping in New York could then be said to act through machines in factories throughout the world. The definition thus seems too broad for the rather fine-grained legal ascriptions of responsibility, but further legal discussion is required here. (Again, we wish to thank a reviewer for this suggestion.)
See Duff (2004) for a long (and favorable) discussion of giving up the act requirement.
This does not necessarily mean that persons cannot be held responsible for effects brought about through BCIs in the absence of action (or through a passive BCI).
Aflalo, T., Kellis, S., Klaes, C., Lee, B., Shi, Y., Pejsa, K., et al. (2015). Decoding motor imagery from the posterior parietal cortex of a tetraplegic human. Science, 348(6237), 906–910. https://doi.org/10.1126/science.aaa5417.
Ahn, M., Lee, M., Choi, J., & Jun, S. (2014). A review of brain-computer interface games and an opinion survey from researchers, developers and users. Sensors, 14(8), 14601–14633. https://doi.org/10.3390/s140814601.
Alimardani, M., Nishio, S., & Ishiguro, H. (2016). Removal of proprioception by BCI raises a stronger body ownership illusion in control of a humanlike robot. Scientific Reports, 6(1).
Bargh, J. A., & Chartrand, T. L. (1999). The unbearable automaticity of being. American Psychologist, 54, 462–479.
Bashford, L., Mehring, C., & Serino, A. (2016). Ownership and agency of an independent supernumerary hand induced by an imitation brain-computer interface. PLoS One, 11(6), e0156591.
Bratman, M. E. (1987). Intention, plans and practical reason. Cambridge: Harvard University Press.
Bublitz, C. (2016). Der rechtliche Handlungsbegriff. In M. Kühler & M. Rüther (Eds.), Handbuch der Handlungstheorie (pp. 389–393). Stuttgart: Metzler.
Clarke, A. (2010). Skilled activity and the causal theory of action. Philosophy and Phenomenological Research, 80(3), 523–550.
Clausen, J., Fetz, E., Donoghue, J., Ushiba, J., Spörhase, U., Chandler, J., et al. (2017). Help, hope, and hype: ethical dimensions of neuroprosthetics. Science, 356(6345), 1338–1339.
Constine, J. (2017). Facebook is building brain-computer interfaces for typing and skin-hearing, TechCrunch, April 19, 2017, Web: https://techcrunch.com/2017/04/19/facebook-brain-interface/ (Accessed Sep 4, 2017).
Danto, A. C. (1973). Analytic philosophy of action. Cambridge: Cambridge University Press.
Davidson, D. (2001, orig. 1963). Actions, reasons, and causes. In D. Davidson (Ed.), Essays on actions and events (2nd ed.) (pp. 3–19). Oxford: Oxford University Press.
Do, A. H., Wang, P. T., King, C. E., Chun, S. N., & Nenadic, Z. (2013). Brain-computer interface controlled robotic gait orthosis. Journal of Neuroengineering and Rehabilitation, 10, 111. https://doi.org/10.1186/1743-0003-10-111.
Duff, A. (2004). Action, the act requirement and criminal liability. Royal Institute of Philosophy Supplements, 55, 69–103.
Evans, N., Gale, S., Schurger, A., & Blanke, O. (2015). Visual feedback dominates the sense of agency for brain-machine actions. PLoS One, 10(6), e0130019. https://doi.org/10.1371/journal.pone.0130019.
Fletcher, G. P. (2007). The grammar of criminal law: American, comparative, and international: volume one: foundations. Oxford: Oxford University Press.
Ford, A. (2016). The province of human agency. Noûs.
Frankfurt, H. G. (1978). The problem of action. American Philosophical Quarterly, 15(2), 157–162.
Galan, F., Nuttin, M., Lew, E., Ferrez, P. W., Vanacker, G., Philips, J., & Millan Jdel, R. (2008). A brain-actuated wheelchair: asynchronous and non-invasive brain-computer interfaces for continuous control of robots. Clinical Neurophysiology, 119(9), 2159–2169. https://doi.org/10.1016/j.clinph.2008.06.001.
Gallagher, S. (2012). Multiple aspects in the sense of agency. New Ideas in Psychology, 30(1), 15–31. https://doi.org/10.1016/j.newideapsych.2010.03.003.
Glannon, W. (2014a). Neuromodulation, agency and autonomy. Brain Topography, 27(1), 46–54. https://doi.org/10.1007/s10548-012-0269-3.
Glannon, W. (2014b). Prostheses for the will. Frontiers in Systems Neuroscience, 8, 79. https://doi.org/10.3389/fnsys.2014.00079.
Goldman, A. (1970). A theory of human action. Englewood Cliffs: Prentice Hall.
Graimann, B., Allison, B., & Pfurtscheller, G. (2009). Brain–computer interfaces: a gentle introduction. In B. Graimann, G. Pfurtscheller, & B. Allison (Eds.), Brain-computer interfaces. Berlin, Heidelberg: Springer.
Haselager, P. (2013). Did I do that? Brain–computer interfacing and the sense of agency. Minds and Machines, 23(3), 405–418.
Himmelreich, J. (2016). Agency and embodiment: groups, human–machine interactions, and virtual realities. Ratio. Online First. https://doi.org/10.1111/rati.12158.
Hohne, J., Schreuder, M., Blankertz, B., & Tangermann, M. (2011). A novel 9-class auditory ERP paradigm driving a predictive text entry system. Frontiers in Neuroscience, 5, 99. https://doi.org/10.3389/fnins.2011.00099.
Holz, E. M., Botrel, L., Kaufmann, T., & Kubler, A. (2015). Long-term independent brain-computer interface home use improves quality of life of a patient in the locked-in state: a case study. Archives of Physical Medicine and Rehabilitation, 96(3 Suppl), S16–S26. https://doi.org/10.1016/j.apmr.2014.03.035.
Hornsby, J. (1980). Actions. London: Routledge.
Husak, D. (2006). Rethinking the act requirement. Cardozo Law Review, 28, 2437.
Hieronymi, P. (2009). Two kinds of agency. In L. O’Brien & M. Soteriou (Eds.), Mental actions (pp. 137–162). Oxford: OUP.
Kaufmann, T., Herweg, A., & Kübler, A. (2014). Toward brain-computer interface based wheelchair control utilizing tactually-evoked event-related potentials. Journal of NeuroEngineering and Rehabilitation, 11(1), 7.
Kellmeyer, P., Cochrane, T., Müller, O., Mitchell, C., Ball, T., Fins, J. J., & Biller-Andorno, N. (2016). The effects of closed-loop medical devices on the autonomy and accountability of persons and systems. Cambridge Quarterly of Healthcare Ethics, 25(04), 623–633. https://doi.org/10.1017/S0963180116000359.
Kotchetkov, I. S., Hwang, B. Y., Appelboom, G., Kellner, C. P., & Connolly, E. S. (2010). Brain-computer interfaces: military, neurosurgical, and ethical perspective. Neurosurgical Focus, 28(5), E25. https://doi.org/10.3171/2010.2.FOCUS1027.
LaFleur, K., Cassady, K., Doud, A., Shades, K., Rogin, E., & He, B. (2013). Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain-computer interface. Journal of Neural Engineering, 10(4), 046003. https://doi.org/10.1088/1741-2560/10/4/046003.
Lavin, D. (2013). Must there be basic action? Noûs, 47(2), 273–301. https://doi.org/10.1111/j.1468-0068.2012.00876.x.
Lebedev, M. A., & Nicolelis, M. A. (2006). Brain-machine interfaces: past, present and future. Trends in Neurosciences, 29(9), 536–546. https://doi.org/10.1016/j.tins.2006.07.004.
Levy, S. (2017). Why you will one day have a chip in your brain. WIRED 07.05.17, https://www.wired.com/story/why-you-will-one-day-have-a-chip-in-your-brain [accessed Feb 2nd, 2018].
Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in voluntary action. Behavioral and Brain Sciences, 8(4), 529–539.
Libet, B. (1999). Do we have free will? Journal of Consciousness Studies, 6(8–9), 47–57.
Lim, C. G., Lee, T. S., Guan, C., Fung, D. S., Zhao, Y., Teng, S. S., et al. (2012). A brain-computer interface based attention training program for treating attention deficit hyperactivity disorder. PLoS One, 7(10), e46692. https://doi.org/10.1371/journal.pone.0046692.
Limerick, H., Coyle, D., & Moore, J. W. (2014). The experience of agency in human-computer interactions: a review. Frontiers in Human Neuroscience, 8, 643. https://doi.org/10.3389/fnhum.2014.00643.
Looned, R., Webb, J., Xiao, Z. G., & Menon, C. (2014). Assisting drinking with an affordable BCI-controlled wearable robot and electrical stimulation: a preliminary investigation. Journal of Neuroengineering and Rehabilitation, 11, 51. https://doi.org/10.1186/1743-0003-11-51.
Lumer, C. (2017). Automatic actions: agency, intentionality, and responsibility. Philosophical Psychology, 1–29. https://doi.org/10.1080/09515089.2017.1291928.
Mak, J. N., & Wolpaw, J. R. (2009). Clinical applications of brain-computer interfaces: current state and future prospects. IEEE Reviews in Biomedical Engineering, 2, 187–199. https://doi.org/10.1109/rbme.2009.2035356.
Martel, A., Dahne, S., & Blankertz, B. (2014). EEG predictors of covert vigilant attention. Journal of Neural Engineering, 11(3), 035009. https://doi.org/10.1088/1741-2560/11/3/035009.
Mele, A. R. (1997). Agency and mental action. Noûs, 31, 231–249. https://doi.org/10.1111/0029-4624.31.s11.11.
Mele, A. R. (1992). Springs of action: understanding intentional behavior. New York: Oxford University Press.
Metzinger, T. K. (2017). The problem of mental action. In T. K. Metzinger & W. Wiese (Eds.), Philosophy and predictive processing. Theoretical Philosophy/MIND Group – JGU Mainz.
Metzinger, T. (2013). The myth of cognitive agency: subpersonal thinking as a cyclically recurring loss of mental autonomy. Frontiers in Psychology, 4. https://doi.org/10.3389/fpsyg.2013.00931.
Moore, J. W., & Fletcher, P. C. (2012). Sense of agency in health and disease: a review of cue integration approaches. Consciousness and Cognition, 21(1), 59–68. https://doi.org/10.1016/j.concog.2011.08.010.
Moore, J. W. (2016). What is the sense of agency and why does it matter? Frontiers in Psychology, 7. https://doi.org/10.3389/fpsyg.2016.01272.
Moore, M. S. (2010). Act and crime: the philosophy of action and its implications for criminal law. Oxford: Oxford Univ. Press.
Mullin, E. (2017). Reached via a mind-reading device, deeply paralyzed patients say they want to live, Technology Review, January 31, 2017, web: https://www.technologyreview.com/s/603512/reached-via-a-mind-reading-device-deeply-paralyzed-patients-say-they-want-to-live/
Mühl, C., Allison, B., Nijholt, A., & Chanel, G. (2014). A survey of affective brain computer interfaces: principles, state-of-the-art, and challenges. Brain-Computer Interfaces, 1(2), 66–84. https://doi.org/10.1080/2326263X.2014.912881.
Nicolelis, M. (2011). Beyond boundaries. The new science of connecting mind with machines—and how it will change our lives. New York: Henry Holt and Company.
O’Connor, T. (2002). Persons and causes: the metaphysics of free will. Oxford: Oxford University Press.
Pacherie, E. (2007). The sense of control and the sense of agency. Psyche, 13(1), 1–30.
Palmer, J. (2009, April 24). The multimodal brain orchestra performed its world premiere on Thursday. BBC News. Retrieved from http://news.bbc.co.uk/go/pr/fr/-/2/hi/science/nature/8016869.stm
Perdikis, S., Leeb, R., Williamson, J., Ramsay, A., Tavella, M., Desideri, L., et al. (2014). Clinical evaluation of BrainTree, a motor imagery hybrid BCI speller. Journal of Neural Engineering, 11(3), 036003. https://doi.org/10.1088/1741-2560/11/3/036003.
Proust, J. (2001). A plea for mental acts. Synthese, 129(1), 105–128.
Requarth, T. (2015). This is your brain. This is your brain as a weapon, Foreign Policy web: http://foreignpolicy.com/2015/09/14/this-is-your-brain-this-is-your-brain-as-a-weapon-darpa-dual-use-neuroscience/; Accessed: July, 12, 2017.
Rojahn, S. Y. (2013). Samsung demos a tablet controlled by your brain. MIT Technology Review. Retrieved from https://www.technologyreview.com/s/513861/samsung-demos-a-tablet-controlled-by-your-brain/
Roxin, C. (1962). Zur Kritik der finalen Handlungslehre. Zeitschrift für die gesamte Strafrechtswissenschaft, 74(4), 515–561.
Ryle, G. (1949). The concept of mind. Chicago: University of Chicago Press.
Sato, A., & Yasuda, A. (2005). Illusion of sense of self-agency: discrepancy between the predicted and actual sensory consequences of actions modulates the sense of self-agency, but not the sense of self-ownership. Cognition, 94(3), 241–255. https://doi.org/10.1016/j.cognition.2004.04.003.
Schlosser, M. (2011). Agency, ownership, and the standard theory. In J. Aguilar, A. Buckareff, & K. Frankish (Eds.), New waves in philosophy of action (pp. 13–31). Basingstoke: Macmillan.
Schlosser, M. E. (2010). Bending it like Beckham: movement, control and deviant causal chains. Analysis, 70(2), 299–303. https://doi.org/10.1093/analys/anp176.
Sellers, E. W., Vaughan, T. M., & Wolpaw, J. R. (2010). A brain-computer interface for long-term independent home use. Amyotrophic Lateral Sclerosis, 11(5), 449–455. https://doi.org/10.3109/17482961003777470.
Shepherd, J. (2015). Conscious control over action. Mind & Language, 30(3), 320–344. https://doi.org/10.1111/mila.12082.
Soon, C. S., He, A. H., Bode, S., & Haynes, J. D. (2013). Predicting free choices for abstract intentions. Proceedings of the National Academy of Sciences of the United States of America, 110(15), 6217–6222.
Sreedharan, S., Sitaram, R., Paul, J. S., & Kesavadas, C. (2013). Brain-computer interfaces for neurorehabilitation. Critical Review in Biomedical Engineering, 41(3), 269–279.
Strawson, G. (2003). Mental ballistics or the involuntariness of spontaneity. Proceedings of the Aristotelian Society, 103, 227–256.
Stout, R. (2006). Action. Montreal: McGill-Queen’s University Press. Retrieved from http://site.ebrary.com/id/10455594.
Synofzik, M., & Vosgerau, G. (2012). Beyond the comparator model. Consciousness and Cognition, 21(1), 1–3. https://doi.org/10.1016/j.concog.2012.01.007.
Synofzik, M., Vosgerau, G., & Newen, A. (2008). I move, therefore I am: a new theoretical framework to investigate agency and ownership. Consciousness and Cognition, 17(2), 411–424. https://doi.org/10.1016/j.concog.2008.03.008.
Tamburrini, G. (2009). Brain to computer communication: ethical perspectives on interaction models. Neuroethics, 2(3), 137–149. https://doi.org/10.1007/s12152-009-9040-1.
Tidoni, E., Gergondet, P., Kheddar, A., & Aglioti, S. M. (2014). Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot. Frontiers in Neurorobotics, 8, 20. https://doi.org/10.3389/fnbot.2014.00020.
Vierkant, T. (2018). Choice in a two systems world: picking & weighing or managing & metacognition. Phenomenology and the Cognitive Sciences, 17(1), 1–13. https://doi.org/10.1007/s11097-016-9493-8.
Vlek, R., van Acken, J.-P., Beursken, E., Roijendijk, L., & Haselager, P. (2014). BCI and a user’s judgment of agency. In G. Grübler & E. Hildt (Eds.), Brain-computer-interfaces in their ethical, social and cultural contexts (Vol. 12, pp. 193–202). Dordrecht: Springer Netherlands. Retrieved from http://link.springer.com/10.1007/978-94-017-8996-7_16.
Wang, Y., & Jung, T.-P. (2011). A collaborative brain-computer Interface for improving human performance. PLoS One, 6(5). https://doi.org/10.1371/journal.pone.0020422.
Wegner, D. M., Sparrow, B., & Winerman, L. (2004). Vicarious agency: experiencing control over the movements of others. Journal of Personality and Social Psychology, 86(6), 838–848. https://doi.org/10.1037/0022-3514.86.6.838.
Wegner, D. M. (2002). The illusion of conscious will. Cambridge, MA: The MIT Press.
Wenzlaff, R. M., & Wegner, D. M. (2000). Thought suppression. Annual Review of Psychology, 51(1), 59–91. https://doi.org/10.1146/annurev.psych.51.1.59.
Wolpaw, J., & Wolpaw, E. W. (2012). Brain–computer interfaces: something new under the sun. In J. Wolpaw & E. W. Wolpaw (Eds.), Brain-computer interfaces: Principles and practice (pp. 3–12). Oxford: Oxford University Press.
Wu, W. (2013). Mental action and the threat of automaticity. In A. Clark, J. Kiverstein, & T. Vierkant (Eds.), Decomposing the will (pp. 244–261). Oxford, New York: Oxford University Press.
Zander, T. O., & Kothe, C. (2011). Towards passive brain-computer interfaces: applying brain-computer interface technology to human-machine systems in general. Journal of Neural Engineering, 8(2), 025005. https://doi.org/10.1088/1741-2560/8/2/025005.
Zander, T. O., Kothe, C., Jatzev, S., & Gaertner, M. (2010). Enhancing Human-Computer Interaction with Input from Active and Passive Brain-Computer Interfaces. In D. S. Tan & A. Nijholt (Eds.), Brain-Computer Interfaces (pp. 181–199). London: Springer London.
This paper has been written in the interdisciplinary research project INTERFACES funded by the Federal Ministry of Education and Research (BMBF) in Germany and ERA-NET Neuron in the European Research Area Network for Neuroscience Research. Further, we would like to extend our gratitude to the following people for letting us visit their lab and helping us to improve our understanding of brain-computer interfaces: Benjamin Blankertz and his team (TU Berlin), Gernot Müller-Putz and his team (TU Graz), Michael Tangermann and his team (University of Freiburg), and Andrea Kübler and her team (University of Würzburg), who have also let us use a BCI.
Steinert, S., Bublitz, C., Jox, R. et al. Doing Things with Thoughts: Brain-Computer Interfaces and Disembodied Agency. Philos. Technol. 32, 457–482 (2019). https://doi.org/10.1007/s13347-018-0308-4
- Brain-computer interface
- Brain-machine interface
- Action theory
- Causal theory of action
- Mental action
- Freedom of thought
- Standard theory