
Wired Emotions: Ethical Issues of Affective Brain–Computer Interfaces

  • Steffen Steinert
  • Orsolya Friedrich
Open Access
Original Research/Scholarship

Abstract

Ethical issues concerning brain–computer interfaces (BCIs) have already received a considerable amount of attention. However, one particular form of BCI has not received the attention that it deserves: affective BCIs, which allow for the detection and stimulation of affective states. This paper brings the ethical issues of affective BCIs into sharper focus. The paper briefly reviews recent applications of affective BCIs and considers the ethical issues that arise from these applications. Ethical issues that affective BCIs share with other neurotechnologies are presented, and ethical concerns that are specific to affective BCIs are identified and discussed.

Keywords

Affective brain–computer interface · Emotion · Brain–computer interface · Affective states

Introduction

Research on brain–computer interfaces (BCIs) is flourishing and is attracting more and more attention and investment. For example, in 2016 the well-known entrepreneur Elon Musk co-founded the neurotechnology company Neuralink, which aims to create BCIs, and Facebook has a secret hardware project that works on BCIs (Marsh 2018). Brain–computer interfaces have a wide range of applications, for example by enabling disembodied agency, that is, acting without moving the body (Steinert et al. 2018). Affective BCI is a technology that is able to detect, influence and stimulate affective states. Whereas brain–computer interfaces in general have already received a fair amount of ethical and theoretical treatment, the sub-field of affective brain–computer interfaces has not yet received the ethical scrutiny that it deserves. This paper seeks to close this gap.

A few clarifying remarks: Affective states are experiential phenomena like emotions and moods. Emotions are intentional mental states because they involve a relation between the person and something else (i.e., the object of the emotion). For example, one is angry with someone or afraid of something. Further, emotions involve evaluations of something, emotions are usually accompanied by bodily feelings, and emotions are motivational. In contrast, moods are usually long-term, not intentional and more diffuse.

Affective states are important because they are closely linked to values; emotions, in particular, play a crucial role in moral judgment (Roeser and Todd 2014). Further, emotions play a central role in human life, as they are important in interpersonal relationships, contribute to group formation, and play a role in decision making and reasoning. Because affective states are one of the essential ways in which humans engage with the world, it is critical to accompany the development of affective BCIs with ethical reflection as early as possible.

Affective BCIs: Recent Trends and Applications

What are affective BCIs and what are they used for? Affective BCIs work like other brain–computer interfaces in that they read out neural signals that are then used to perform a certain task (Mühl et al. 2014). An affective BCI is a system that uses neurophysiological signals to extract features related to affective states (e.g., emotions and moods). Brain signals can be measured invasively or non-invasively. Invasive means that electrodes are inserted into the body. One example of an invasive method is electrocorticography (ECoG), where electrodes are placed on the surface of the brain in order to measure the activity of the cerebral cortex. In contrast, non-invasive neurotechnology measures brain activity from outside the head. For instance, electroencephalography (EEG) uses electrodes that are placed on the scalp. Another non-invasive method to monitor brain activity is functional near-infrared spectroscopy (fNIRS), where near-infrared light is used to pick up changes in the brain's blood oxygen level that are linked to brain activity. The output signals can be used as feedback to the user, as input for computer systems, or both. Accordingly, the detection of affective states via affective BCI can be used to modify human–computer interaction. Affective BCIs may be located within the field of affective computing, which, among other things, seeks to utilize information about affective states to enhance interaction with computers (Picard 2000). Of course, affective BCIs are not the only way to detect affective states. It is also possible to utilize physiological (e.g., blood pressure) or behavioral (e.g., facial expression) signals instead of neurophysiological signals, or even to combine different modalities to enhance classification accuracy (Chanel et al. 2009).
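To make the signal-to-state pipeline concrete, here is a minimal, hypothetical sketch of the kind of processing an EEG-based affective BCI performs: band-power features are extracted from EEG epochs and fed to a standard classifier. The sampling rate, frequency bands, and the synthetic data (with an artificial alpha-band boost for one class) are illustrative assumptions, not details taken from any study cited above.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 128  # assumed sampling rate in Hz (typical for consumer EEG)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(epoch, fs=FS):
    """Average spectral power per channel in canonical EEG frequency bands."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[..., mask].mean(axis=-1))  # one value per channel
    return np.concatenate(feats)  # length: n_channels * n_bands

# Synthetic stand-in for labeled EEG epochs (4 channels, 2-second windows):
# class 1 receives extra 10 Hz (alpha) power, mimicking a detectable state.
rng = np.random.default_rng(0)
t = np.arange(FS * 2) / FS
X, y = [], []
for label in (0, 1):
    for _ in range(40):
        epoch = rng.normal(size=(4, t.size))
        if label == 1:
            epoch += 2.0 * np.sin(2 * np.pi * 10 * t)
        X.append(band_powers(epoch))
        y.append(label)
X, y = np.array(X), np.array(y)

# Train on every other epoch, evaluate on the rest.
clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])
accuracy = clf.score(X[1::2], y[1::2])
```

Real systems differ in many ways (artifact rejection, many more channels and features, subject-specific calibration), but the detect-classify structure is the same: neural signals in, a discrete affective label out.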

It is worth pointing out here that research on affective BCI is an emerging field and that current affective BCI technology cannot yet be smoothly applied to most real-world contexts. While mapping and detecting emotions via EEG is difficult, considerable advances in classifying discrete emotions (e.g., fear, surprise, disgust) have recently been made (Bono et al. 2016; Lee and Hsieh 2014). These advances have prompted some authors (e.g., Lin et al. 2015, 319) to express confidence that affective BCI systems for everyday use are feasible in the near future. So, while some of the applications considered in this paper are to a certain extent speculative, they nevertheless give us a glimpse of what will (sooner or later) be possible.

In recent years, there have been some major advances in the technological ability to recognize affective states. For example, Wu et al. (2017) report a novel method involving EEG that recognizes negative and positive emotional states with high accuracy. The authors propose that their method could be used in wearable EEG systems that monitor emotions on an everyday basis. The accurate detection of emotions could be utilized in other areas as well. For instance, Ali et al. (2016) suggest that their EEG-based approach to emotion detection can be helpful in the context of healthcare, e.g. in ambient assisted living facilities.

Besides detecting affective states, it is also possible to use affective brain–computer interfaces to stimulate and influence the affective states of people. Daly et al. (2016) developed an affective BCI system that can detect the current affective state and modulate it by playing emotionally evocative music, thereby moving people from one affective state to another. For example, participants could be moved from a neutral state to feeling happy or from an excited state to a calm state. Other researchers also used music combined with affective BCI systems to influence the affective state of the subjects (Ehrlich et al. 2017).

When there is a continuous interaction between brain–computer interface systems and brain activity this is called a closed-loop system. Another area where affective brain–computer interfaces have been said to be helpful is in the deep brain stimulation of the limbic circuit of people with emotional disorders. For example, a closed-loop system comprised of an emotion decoder and a stimulation device could serve as an ‘emotional prosthesis’ (Widge et al. 2014). Such an emotional prosthesis could be used to ameliorate the painful memories of traumatic events.
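The closed-loop idea can be illustrated with a deliberately simplified sketch (a toy model, not any cited system): a stub "decoder" stands in for affective-state detection, a stub "stimulator" stands in for an intervention such as mood-congruent music or stimulation, and the loop intervenes only while the decoded state lies outside a tolerance band around the target. The valence scale, gain, and tolerance values are all illustrative assumptions.

```python
class StubDecoder:
    """Stand-in for an emotion decoder; a real system would classify neural signals."""
    def __init__(self, valence=-0.8):
        self.valence = valence  # affective state on a [-1, 1] valence scale

    def read(self):
        return self.valence

class StubStimulator:
    """Stand-in for an intervention device (e.g., emotionally evocative music)."""
    def apply(self, decoder, gain=0.3):
        # Nudge the decoded state a fraction of the way toward neutral (0.0).
        decoder.valence += gain * (0.0 - decoder.valence)

def closed_loop(decoder, stimulator, target=0.0, tolerance=0.1, max_steps=50):
    """Decode, compare with the target state, stimulate only while outside tolerance."""
    history = [decoder.read()]
    for _ in range(max_steps):
        if abs(decoder.read() - target) <= tolerance:
            break  # within the desired affective band: stop intervening
        stimulator.apply(decoder)
        history.append(decoder.read())
    return history

trace = closed_loop(StubDecoder(), StubStimulator())
```

Even this toy version makes the ethically salient feature visible: the regulation step runs automatically, without the user deciding when (or whether) to intervene.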

Affective BCIs can also facilitate emotion expression. In particular, patients with severe motor impairments, for example due to amyotrophic lateral sclerosis (ALS), a neurodegenerative disease that causes the degeneration of the neurons controlling voluntary muscle movement, find it hard to express their emotions (Kashihara 2014). Affective BCIs can give such patients the opportunity to express their emotions, thereby increasing their quality of life (Nijboer et al. 2009).

Affective BCI technology need not be limited to therapeutic applications, the medical context and scientific research. Andujar et al. (2015) hypothesize that an affective BCI could also be helpful in non-face-to-face communication by displaying the emotional status of the communication partner. Further, a wearable device (e.g., a bracelet or ring) could inform wearers, and others, that they are in a particular affective state (Hao et al. 2014). Thereby, an affective BCI may help one to express affective states in a non-conventional way. Similarly, one way to broadcast people's affective states via affective BCI is so-called artistic BCIs, in which the affective state of the user is influenced (e.g., by sound or image) and then represented "[…] visually or through a type of audio where the corresponding user and others are able to perceive visually or audibly how the user is feeling." (Andujar et al. 2015, 62).

Affective BCIs could also be used in the entertainment sector. For example, Brouwer et al. (2015) present an affective BCI system that picks up the affective states of the users while they are reading a novel. Based on the changing affective states during reading, the system provides a particular version of the section of the novel. Further, levels of frustration or joy could be used to adapt a computer application to the affective state of the user. Based on research on the classification of sadness and happiness using EEG (Pan et al. 2016) and research on the neurophysiological underpinnings of frustration (Myrden and Chau 2017; Reuderink et al. 2013), one can easily envision a computer application that adapts to these affective states of the user. A potential field for such adaptive computer applications is computer games, where information about the affective state of the user could be used to change how the game is presented or how the game unfolds in order to match or influence the affective state of the player (Andujar et al. 2015). This means that the game will be more individualized to fit the respective player. Everybody would be playing a different game.

Some consumer products that utilize affective states are already on the market. For instance, Mico, developed by the Japanese company Neurowear, is a set of headphones that selects music based on the mood of the wearer. Further, Neurocam, by the same company, is a wearable camera that detects the emotions of the user and automatically takes a snapshot in moments when the user is emotionally engaged (Neurowear 2018). A domain where affective BCIs have already been applied is the music industry. For instance, an affective BCI has been successfully used to measure the affective states of listeners, and of the performer, during a live performance, and to adapt the system to these states (Eaton et al. 2015), thus harmonizing the affects. Furthermore, detecting the listener's affective state may enable individualized pieces of music, as the system can adapt to the affective state in real time. Other possible applications of affective BCIs regarding music are described by Andujar et al. (2015).

Affective BCIs and Ethical Issues

The studies referenced above provide ample indication that highly sophisticated forms of detecting affective states are feasible. As emotions play a vital part in people's lives and are a crucial aspect of what it means to be human, the ethical implications of these developments should be reflected upon. Of course, not all of the ethical issues that arise in connection with affective BCIs are completely new. There are some ethical issues, like harm–benefit evaluations and how to deal with the collection of sensitive data, that affective BCIs share with similar neurotechnologies, particularly other types of BCIs. These ethical issues are briefly addressed in this section; the main focus of the remainder of the paper is on the unique ethical challenges raised by affective BCIs. These challenges have to do with the capabilities of affective BCIs to monitor, influence and directly stimulate the affective states of people. The table below encapsulates the ethical issues that affective BCIs have in common with other forms of BCI and the ethical challenges specific to affective BCIs (Table 1).
Table 1
Ethical issues of affective BCIs

Common ethical issues of affective and other BCIs:
  • Risk to the body (e.g., infections, damage to tissue)
  • Evaluation of benefit and harm
  • Data security and privacy
  • Potentially false expectations
  • Informed consent
  • Problems of shared control, criminal guilt and liability
  • Impact on self, agency, identity and personhood (e.g., through self-quantification)
  • Biases embedded in the device

Specific ethical issues related to affective BCIs

Monitoring of affective states:
  • Self-tracking of emotions could infringe on autonomy and authenticity
  • Fostering of emotion stereotypes
  • Alienation from one's own emotions
  • Social pressure to self-regulate or enhance control over emotions

Influencing affective states:
  • Manipulation of affective processes and thereby of intentions, decisions, actions
  • Novel threats to mental integrity and cognitive liberty
  • New ways of nudging/emotional influence by companies or government
  • Issues with living in an automatically emotion-adjusted environment
  • Outsourcing of emotion regulation
  • Responsibility for emotions
  • Questions of what it means to be human

Directly stimulating affective states:
  • In closed-loop systems: issues of emotional self-regulation and responsibility
  • Non-authentic emotions
  • Undermining sense of self, agency and self-determination
  • Potential self-estrangement when emotions conflict with judgment
  • Problems in assessing the origin of an emotion
  • Psychological distress as a harm
  • Changes in autobiographic memory, sense of self, identity
  • Issues of personhood
  • Responsibility ascription (e.g., manipulation of emotions in military context)

Affective BCIs share certain ethically relevant issues, like risks to the body, data protection and informed consent, with other neurotechnologies. Affective BCIs can take an invasive form, where the technology is embedded in the brain. Here there is the risk of infection or brain tissue injuries. Because the avoidance of harm is a basic value in medical ethics, the well-being of the patient, the benefits of the procedure and the potential harm of the intervention need to be balanced carefully. So, similar to other invasive neurotechnologies, the ethical evaluation of benefit and harm is crucial when it comes to the use of invasive BCIs (Glannon 2014, 2016), and invasive affective BCIs are no exception here.

When affective BCIs are deployed in a medical or research context, two issues that need to be addressed are the management of expectation and informed consent (Klein 2016; McCullagh et al. 2014; Vlek et al. 2012). A person’s self-determination is an important ethical value and a person needs to understand the potential risks of every medical intervention before consenting to the procedure. Understanding the (long-term) consequences of detecting, influencing and stimulating affective states via affective BCIs can be difficult and therefore, the process of informed consent requires particular attention.

All BCI systems collect sensitive data, which is why the issues of data security, privacy and neuro-hacking need to be addressed (Attiah and Farah 2014; Ienca and Haselager 2016; Jebari 2013; Klein 2016; O’Brolchain and Gordijn 2014). These issues also need to be tackled when it comes to affective BCI because affective BCIs collect data about affective states, which is a very sensitive topic for most people. Data about affective states belong to an individual’s personal data and therefore need to be protected from any undue treatment by other parties. Given that affective BCI systems will also include elements that are not fully under the control of the user, there are some well-known concerns, like shared control and criminal guilt, that have already been addressed concerning other BCI applications (Grübler 2011; Lucivero and Tamburrini 2008; O’Brolchain and Gordijn 2014; Tamburrini 2009; Weinberger and Greenbaum 2016). Recently, researchers have called for a veto control for semi-autonomous BCI systems (Clausen et al. 2017). This type of veto control also seems to be something that is worth thinking about regarding affective BCI systems. At the very least, users of affective BCIs should be enabled to understand what the system does and why, and what kind of data are collected and processed.

Affective BCIs and Monitoring of Affective States

In addition to the ethical concerns shared with other neurotechnologies, there are several ethical challenges that are unique to affective BCIs by virtue of their potential to monitor, influence and stimulate affective states. Some of these ethical issues, for example, autonomy (Friedrich et al. 2018), have already been addressed in the literature on other BCIs. Nevertheless, these ethical issues are important for affective BCIs as well and will be briefly addressed where necessary.

There is a distinction to be drawn between directly stimulating affective states, influencing affective states and monitoring affective states. Affective BCIs may be used for all three. This section addresses ethical issues that arise from the ability of affective BCIs to monitor affective states. The information gathered from this monitoring could subsequently be used to manipulate or induce affective states. However, even without the additional manipulation, the monitoring itself is in need of ethical scrutiny.

Similar to tracking sleep, exercise and one's heartbeat via devices and apps, tracking affective states is no longer off limits. Not surprisingly, tracking people's emotions will be of interest to parties with economic motivations (e.g., marketing research) and in areas where customer satisfaction is an important factor. There are already companies that use technology, like smart identification badges that monitor speech (Heath 2016), to observe the emotions of employees in order to increase performance or obedience at the workplace. Affective BCIs would open up new opportunities for this kind of employee tracking by making possible a more precise monitoring. Similar to other brain-reading technologies, the monitoring of affective states raises questions concerning mental privacy, because it potentially allows for the detection of mental states that the subject may not wish to share. The use of affective BCIs can be linked to the general ethical discussion regarding mental privacy and the monitoring of mental states. Here, the ethical evaluation of the implications of affective BCIs can fall back on existing contributions. For example, Mecacci and Haselager (2017) helpfully provide a framework for the assessment of the implications of brain reading for mental privacy. This framework may also be used to assess the ethical challenges concerning mental privacy when affective BCIs are used to monitor affective states.

Monitoring emotions is not limited to the workplace or other professional contexts. There are applications available for emotional self-tracking and so-called emotional self-quantification (e.g., Mercuryapp, or EmotionSense). Both self-tracking practices and self-quantification have some ethical and cultural implications that need to be addressed. Lupton (2015) suggests that apps that track people’s sexual behavior may foster normative stereotypes about sex. By analogy, it is not very far-fetched to suspect that affective BCIs may have similar implications in that they could foster stereotypes concerning emotions. Closely connected to stereotypes is the issue of biases. Some authors have already pointed out the problem of biases embedded in neural devices (Yuste et al. 2017). Bias is an ethical issue that pertains to all forms of BCIs. However, the particularly crucial aspect in the case of affective BCI is that there are potential biases regarding affective states. For example, people have biases about emotions that are based on gender or age (Fabes and Martin 1991). So, it is a sensible idea to make sure that biases concerning emotions are not embedded in affective BCI technology. Further, other authors have raised concerns regarding the disciplining effects of self-tracking and that self-tracking could infringe on values like autonomy and authenticity (Sharon 2017). The same concerns, then, need to be taken seriously regarding the tracking of affective states in general, and the tracking via affective BCI in particular.

This does not rule out that monitoring affective states via affective BCIs could enhance autonomy and contribute positively to one's well-being. For example, neurofeedback has been shown to be a valuable aid in the regulation of brain areas responsible for emotions (Johnston et al. 2010). Affective BCIs that provide feedback regarding the emotional states of the user may especially help users gain some control over these states. However, this puts another ethical issue into the spotlight: the possibility of affective BCI systems with real-world applicability may put social pressure on some individuals to self-regulate their emotions with the help of affective BCIs in order to fall within the domain of what is considered affectively 'normal'.

Using an affective BCI may also have repercussions on the ability to reflect on and engage with emotions, and for some people the potentially comprehensive monitoring ability of affective BCIs may result in an alienation from their emotions. Recall the camera, described above, that automatically takes pictures whenever one is emotionally engaged, or the sound system that plays music according to the mood one is in. In these two cases, there is only limited need for people to pay attention to their emotions and reflect on whether it is worthwhile to take a picture, or to think about which music best suits their mood. The technology takes care of these decisions by automatically making the choice for the user. In cases like these, the ability to reflect on an emotion and deliberate whether to act on that emotion is compromised by the affective BCI. This reflection and deliberation, however, is a crucial component of being a moral agent. The role of affective states in human life, and the ability of humans to notice, control and cultivate emotions in order to be a moral person, has been a key issue of ethics throughout history. If people no longer have to take care of their affective states because of affective BCIs, reconsiderations of relevant presumptions about human conduct could become necessary.

Affective BCIs and Influencing Affective States

Besides monitoring affective states, another relevant ethical issue that needs to be addressed is that affective BCIs can be utilized to influence affective states. This section addresses ways of influencing emotions that are not invasive. That means that the affective BCI system does not directly and invasively interfere with brain processes. Ethical issues that arise in connection with directly and invasively stimulating affective states in people are addressed in the next section. Please also note that the above-mentioned ethical challenges regarding monitoring of affective states may also play a role here because both influencing and directly stimulating affective states may rely on monitoring affective states in some form or other.

One possible way to influence affective states that affective BCIs could facilitate is nudging. Broadly speaking, nudging refers to interventions that influence people's behavior without forcing them to commit a certain act (Sunstein 2015, 417). A familiar example is the fly painted in urinals, which nudges users to aim at a certain spot. Other examples are reminders or push notifications in smartphone applications. Digital technology is especially suited for a variety of forms of nudging that can respond flexibly to changes in user behavior. Affective BCIs seem to be optimal instruments for nudging, because decisions and emotions go hand in hand. Emotions contribute to the evaluations that people make, and individuals usually take current and expected future emotions into account when they ponder a decision (Bagozzi et al. 2016; Mellers and McGraw 2001; Wilson and Gilbert 2005). Further, it is well established that emotions influence judgment and decision-making (Angie et al. 2011). In short, emotions shape intentions, decisions and actions. So, in many situations, influencing the emotions of people means influencing their decisions and intentions and the actions that follow these intentions.

Technologies like affective BCIs allow for the manipulation of affective processes of humans. This intervention could infringe on the mental integrity of people. Mental integrity is the capacity of persons to have control over their mental states and brain data. This control entails that without consent nobody can monitor or manipulate these mental states or brain data (Lavazza 2018). Based on the ever-increasing technical ability to intervene in mental processes and the possible threat to mental integrity and cognitive liberty, some authors have argued for a legal protection of the mental realm (Bublitz and Merkel 2014). Future research should consider in more detail the potential implications of affective BCIs for mental integrity and cognitive liberty. Please note here that matters of cognitive liberty and mental integrity also apply for more direct forms of intervention in affective states, that are addressed in the next section.

Imagine an affective BCI-system that constantly reads the emotional state of the user. This kind of information is a valuable resource for companies and governments that are inclined to influence or nudge people to make certain economic or political decisions. Already today there seems to be increasing (mis)use of emotions in politics. Particularly the 2016 presidential election in the United States has brought into sharp focus the connection between technology and the manipulation of the feelings of voters. Artificial intelligence in the form of machine learning and social media was used to micro-target people in order to influence their emotions (Ghosh and Scott 2018; Polonski 2017). Some scholars even see the increasingly technologically mediated influence of emotions as a threat to democracy. For example, the historian Yuval Noah Harari cautions that because of the ability to manipulate emotions by advanced technology, ‘democratic politics will mutate into an emotional puppet show’ (Harari 2018, 68).

When affective BCIs are used in nudging schemes, well-known ethical issues of nudging come to the fore. Some authors have expressed the worry that nudging is detrimental to fairness and freedom (Goodwin 2012). Others have argued against these criticisms, for example by pointing out that nudging may promote autonomy if it steers behavior towards a direction that is in line with one’s own values and character (Sunstein 2015). Using affective BCIs in order to nudge people can be beneficial. Consider an affective BCI that has registered that the users are more inclined to use medication when they are in a certain affective mental state and, perhaps in collaboration with an ambient assisted living system, utilizes this information to nudge them to take their medicine. The benefits in this scenario are obvious. However, the same affective BCI may play a role in a scenario where information about the affective state of the users is used to influence them politically or to nudge them into buying certain goods. While noting that nudging is a complex ethical issue, it is nevertheless important to draw attention to whether and when it is ethically appropriate to use affective BCIs as nudging tools and whether affective BCI research should pursue designs that lend themselves to nudging.

Emotions play a crucial role in decision-making, and particularly in the evaluation of products and the decision to buy them. Coleman and Williams (2013) demonstrate how people’s social identity is connected to a specific emotion profile and that consumers prefer emotional messages that are compatible with their social identity. For example, when individuals are primed with their athlete identity, they find anger-based advertisement more persuasive because anger is consistent with the emotion profile of their social identity as athletes. Given the tight connection between consumer decisions and emotions, it is no surprise that companies want to get their hands on information about people’s emotions in order to target them. For example, Facebook has a history of influencing the emotions of its users. In a widely reported study, Facebook manipulated the news feed of users in order to assess the effect of this manipulation on their emotions (Kramer et al. 2014). Further, a recently leaked Facebook document includes the claim that the company’s algorithms can detect the emotional states of their users, allowing advertisers to determine the right moment when teenagers are in need of a ‘confidence boost’ (Levin 2017), which is another way of saying that they are a good target for advertising. Thinking even further, affective BCI allows for distinct access to the affective states of prospective customers, which in turn can be utilized to create input according to the emotion profile of particular individuals or to emotionally influence people in such a way that makes them more likely to buy a specific product.

Affective BCIs could be used to influence human emotions through an adjustment of that person’s environment. Consider this: As devices become more and more connected, and ambient living and the so-called internet of things become feasible, affective BCIs could in principle be connected to all kinds of devices and smart surroundings. For example, an affective BCI may alter the environment via an ambient lighting system (Andujar et al. 2015), either to match the affective state of the users or to influence their emotions. For instance, when an affective BCI user is angry, their apartment’s lighting could adjust automatically in order to help them calm down. In a scenario like this, the question may be raised about how much the person was actually in charge of the emotional regulation and how much of it was due to the smart interconnected environment. Ultimately, affective BCIs may prompt us to do the ‘symbolic labour’ (Schermer 2009, 221) of re-interpreting and re-conceptualizing the idea of responsibility for emotions.

Although responsibility ascription is usually limited to actions, there is a case to be made that people are also responsible for their emotions because they can be subjected to emotional self-regulation (Roberts 2015). Affective BCIs complicate this responsibility issue, because emotional self-regulation may (in part) be outsourced to the affective BCI-system, which raises the question of how much ‘self’ is actually involved in emotional regulation. As hinted at previously, techniques for controlling and regulating emotions are of fundamental ethical relevance and have played a crucial role in philosophy, psychology and psychotherapy (Charland 2007). New ways of technologically regulating emotions are ethically relevant and need much more consideration. It is prudent to get a head start and to think about these ethical (and conceptual) implications of plausible affective BCI applications before the technology is too far along and much of the ethical reflection is futile. Of course, for new and emerging technologies like affective BCIs it is hard to consider in advance the ethical and social implications. Even harder to grasp are the potential consequences of novel technologies for what it means to be human. Because of these difficulties we should be open to novel ways of exploring these issues. For example, Roeser et al. (2018) have demonstrated that art can be helpful in the ethical reflection on brain–computer interfaces. Extending this idea, one may expect that art will also serve us well in grasping the implications of affective BCIs.

Affective BCIs and Directly Stimulating Affective States

So far, the ethical aspects of indirectly (or non-invasively) influencing affective states with affective BCIs have been discussed. However, affective BCIs may also enable a more invasive and direct way to influence people’s affective states. Eliciting affective responses from people by means of brain stimulation requires ethical considerations.

It is already possible to directly stimulate affective states via invasive technology. For example, electric stimulation of the amygdala can induce negative emotions (e.g., fear) as well as happiness (Lanteaume et al. 2007). Although closed-loop brain stimulation is still in its early stages, it is conceivable to set up an affective BCI system as a closed-loop system. A closed-loop system receives continuous feedback from the brain and stimulates brain activity accordingly. A closed-loop affective BCI system would thus automatically stimulate specific brain areas in order to bring about or suppress certain affective states. This has ethically relevant implications: Closed-loop affective BCI systems put some pressure on the relation between emotional self-regulation and responsibility, in that the machine, and not the user, does the regulating. Further, there is already a precedent when it comes to the possible negative effects of stimulating mental states with closed-loop systems. It has been argued that deep brain stimulation (DBS), a technique for sending electrical impulses to the brain via implants, may undermine agency and personal identity (Goering et al. 2017) and could lead to self-estrangement (Gilbert et al. 2017).
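The closed-loop architecture just described can be sketched in a few lines. Again, the names and the detection threshold are assumptions made for illustration; no real stimulation protocol is this simple.

```python
# Minimal illustration of closed-loop logic: continuous feedback from the
# brain decides whether to stimulate, with no user input anywhere in the loop.
# All names and thresholds are hypothetical.

def classify_state(fear_signal, threshold=0.7):
    """Toy classifier: flag an unwanted affective state above a threshold."""
    return "fear" if fear_signal > threshold else "calm"

def closed_loop_step(fear_signal, target="calm"):
    """One iteration: read brain activity, classify, decide on stimulation."""
    state = classify_state(fear_signal)
    stimulate = state != target  # the machine, not the user, regulates
    return state, stimulate

# Two iterations of the loop with different simulated readings:
print(closed_loop_step(0.9))  # ('fear', True): the system would stimulate
print(closed_loop_step(0.2))  # ('calm', False): no intervention
```

Even in this toy version, the decision to intervene is taken entirely by the system, which is what puts pressure on the link between self-regulation and responsibility.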

DBS could be problematic when it is used to directly stimulate affective states, and people do in fact worry about what this technology does to their emotions. In interviews with participants of DBS trials, people expressed the concern that DBS could be used to bring forth emotions that are not authentic, thereby undermining their sense of self (Klein et al. 2016). In light of this, it seems worthwhile to accompany the development and implementation of affective BCI systems with an assessment of potentially sensitive issues. For instance, what happens when an affective BCI-induced emotion conflicts with the evaluative judgment of the person? Even without affective BCIs, such so-called recalcitrant emotions are a common occurrence. For instance, despite their belief that tiny dogs and flying are not dangerous at all, some people experience fear when they encounter small dogs or when they have to fly (phobias are a pervasive form of recalcitrant emotion). Further, people sometimes have recalcitrant bouts of anger or jealousy that conflict with their judgment about a situation. However, despite being common, recalcitrant emotions can be a confusing experience. Not being able to tell whether an affective state originated from oneself or was triggered by the affective BCI system may be even more disturbing. Provided that harm should be prevented whenever possible, the practical recommendation is to keep the potential for psychological distress at a minimum. The real and potential power of affective BCIs to manipulate emotions calls for ethical scrutiny.

Although the impact of neurotechnology and BCI on the self and personhood has already received some attention (Fenton and Alpert 2008; Glannon 2016; Hildt 2015; Tamburrini 2009), the role of emotions in these issues needs to be considered more thoroughly. Emotions are important for a sense of self and personal identity. For instance, emotions play a crucial part in the constitution of autobiographical memories (Holland and Kensinger 2010). In turn, autobiographical memories are crucial for the constitution of the self and the sense of self (Prebble et al. 2013; Schechtman 1996, 2005). It seems then that the manipulation of emotions has a direct bearing on the constitution of the self. Given that affective BCIs can potentially aid such a manipulation, and given that emotions are a crucial aspect of what it means to be human, the possible consequences of this manipulation regarding the self, identity and personhood should not be taken lightly.

The military is one area where manipulating and stimulating affective states will likely play a crucial role. It is no secret that the military is very interested in using neurotechnology, including BCIs, for military purposes like vehicle control, military training and the enhancement of soldiers (Tennison and Moreno 2012). Specifically, influencing the affective states of soldiers has been said to have advantages, as it may help to ameliorate traumatic experiences after combat or attenuate emotions like anger, which could lead to atrocities (Beard et al. 2016). Further, soldiers are required to control their emotions and build so-called emotional fitness in order to become more resilient (Howell 2015). Affective BCIs could be another tool to achieve the goal of emotion control and emotional fitness in soldiers. Consider the possible uses of affective BCIs for the suppression of fear and empathy, or the use of affective BCIs to modulate anger. Military applications of neurotechnology and enhancement for military purposes involve a host of ethical issues (Beard et al. 2016; Moreno 2012) that also pertain to affective BCIs. For example, affective BCIs may be used to dampen certain emotions in soldiers (e.g., remorse, empathy, or fear) so that they are more aggressive and courageous. However, altering the emotions of soldiers in this way raises crucial questions of responsibility ascription and how much this interference affects moral decision-making.

Conclusion

Although the development of affective BCIs is still at an early stage, concrete ethical issues can already be identified and should be discussed. Some ethical issues, like bodily harm or data security, are not new but pertain to all neurotechnologies. While acknowledging this, this paper went beyond these common issues and introduced potential ethical issues that are particular to affective BCI technology. Specifically, the paper considered ethical concerns regarding monitoring, influencing and directly stimulating affective states.

Some use contexts of affective BCIs require a keener eye on the ethical issues than others. Generally, affective BCI technology appears to be less problematic when the applications do not involve a direct stimulation of affective states. Directly manipulating affective states is a more substantial intervention into the mental set-up of a person, with potentially longer-lasting and even irreversible consequences. Further, scientific research and clinical applications seem to be the least problematic contexts for using affective BCIs, because strict regulations and procedures seek to limit harm as far as possible and require that people be informed about the underlying technology and its risks and benefits. Nonetheless, some of the ethical concerns identified in this paper, like problems with false expectations or informed consent, remain important in the clinical applications of affective BCIs.

Although the majority of applications for affective BCIs are currently in clinical research and therapy, the future will likely see an increase in non-clinical applications. The use of affective BCIs will be more problematic in contexts where people do not have a firm grasp of what is going on and what the technology does to them. This is usually the case with consumer products, which lack rigorous procedures regarding informed consent. To prevent misuse and abuse, the workings of the affective BCI should be as transparent as possible to the user. Unfortunately, if the past is any indication, making the workings of devices and systems transparent to people is not very high on the list of priorities of technology companies. On the contrary, new opportunities for the manipulation of people, whether by companies or governments, are among the greatest worries regarding affective BCIs. For example, emotional profile building could help to subtly influence people emotionally for economic or political gain. Due to the sensitive nature of data about mental states, issues of mental privacy, cognitive liberty and mental integrity have to be raised with stronger emphasis.

Humans have created multiple means to influence their minds, including alcohol, synthetic drugs and various kinds of emotionally engaging entertainment. So it is no stretch of the imagination that people will one day willingly submit to the direct or indirect stimulation of their affective states for recreational purposes. For example, affective BCIs could be used to stimulate affective states in order to enhance the experience of movies, musical performances or video games. These novel ways of monitoring, influencing and stimulating affective states with BCIs could have a deep impact on individuals and on society: they could affect emotional self-regulation, autobiographical memory, sense of self, identity, autonomy, authenticity and responsibility ascriptions. Further, for some individuals the availability of affective BCIs may create social pressure to use this technology to alter their affective states.

Because of the highly likely expansion of affective BCI technology into several non-clinical areas, it is important to scrutinize the various ethical implications of this technology as early as possible. This paper is a step in this direction.

Footnotes

  1. Neurocam relies on the headset developed by the company Neurosky (http://neurosky.com).

Notes

Acknowledgements

We would like to express our gratitude to two anonymous reviewers of this journal who helped us to improve the manuscript.

Funding

Orsolya Friedrich was funded by the German Federal Ministry of Education and Research (01GP1622A) within the ERA-NET Neuron program.

References

  1. Ali, M., Mosa, A. H., Al Machot, F., & Kyamakya, K. (2016). EEG-based emotion recognition approach for e-healthcare applications. In 2016 eighth international conference on ubiquitous and future networks (ICUFN) (pp. 946–950). Vienna, Austria: IEEE. https://doi.org/10.1109/icufn.2016.7536936.
  2. Andujar, M., Crawford, C. S., Nijholt, A., Jackson, F., & Gilbert, J. E. (2015). Artistic brain–computer interfaces: The expression and stimulation of the user’s affective state. Brain–Computer Interfaces, 2(2–3), 60–69. https://doi.org/10.1080/2326263X.2015.1104613.
  3. Angie, A. D., Connelly, S., Waples, E. P., & Kligyte, V. (2011). The influence of discrete emotions on judgement and decision-making: A meta-analytic review. Cognition and Emotion, 25(8), 1393–1422. https://doi.org/10.1080/02699931.2010.550751.
  4. Attiah, M. A., & Farah, M. J. (2014). Minds, motherboards, and money: Futurism and realism in the neuroethics of BCI technologies. Frontiers in Systems Neuroscience. https://doi.org/10.3389/fnsys.2014.00086.
  5. Bagozzi, R. P., Belanche, D., Casaló, L. V., & Flavián, C. (2016). The role of anticipated emotions in purchase intentions. Psychology & Marketing, 33(8), 629–645. https://doi.org/10.1002/mar.20905.
  6. Beard, M., Galliot, J., & Lynch, S. (2016). Soldier enhancement: Ethical risks and opportunities. Australian Army Journal, 13(1), 5–20.
  7. Bono, V., Biswas, D., Das, S., & Maharatna, K. (2016). Classifying human emotional states using wireless EEG based ERP and functional connectivity measures. In 2016 IEEE-EMBS international conference on biomedical and health informatics (BHI) (pp. 200–203). https://eprints.soton.ac.uk/390190/. Accessed October 18, 2018.
  8. Brouwer, A.-M., Hogervorst, M., Reuderink, B., van der Werf, Y., & van Erp, J. (2015). Physiological signals distinguish between reading emotional and non-emotional sections in a novel. Brain–Computer Interfaces, 2(2–3), 76–89. https://doi.org/10.1080/2326263X.2015.1100037.
  9. Bublitz, J. C., & Merkel, R. (2014). Crimes against minds: On mental manipulations, harms and a human right to mental self-determination. Criminal Law and Philosophy, 8(1), 51–77. https://doi.org/10.1007/s11572-012-9172-y.
  10. Chanel, G., Kierkels, J. J. M., Soleymani, M., & Pun, T. (2009). Short-term emotion assessment in a recall paradigm. International Journal of Human-Computer Studies, 67(8), 607–627. https://doi.org/10.1016/j.ijhcs.2009.03.005.
  11. Charland, L. C. (2007). Technological reason and the regulation of emotion. In J. Phillips (Ed.), Philosophical perspectives on technology and psychiatry (pp. 55–70). Oxford: Oxford University Press.
  12. Clausen, J., Fetz, E., Donoghue, J., Ushiba, J., Spörhase, U., Chandler, J., et al. (2017). Help, hope, and hype: Ethical dimensions of neuroprosthetics. Science, 356(6345), 1338–1339. https://doi.org/10.1126/science.aam7731.
  13. Coleman, N. V., & Williams, P. (2013). Feeling like my self: Emotion profiles and social identity. Journal of Consumer Research, 40(2), 203–222. https://doi.org/10.1086/669483.
  14. Daly, I., Williams, D., Kirke, A., Weaver, J., Malik, A., Hwang, F., et al. (2016). Affective brain–computer music interfacing. Journal of Neural Engineering, 13(4), 046022. https://doi.org/10.1088/1741-2560/13/4/046022.
  15. Eaton, J., Williams, D., & Miranda, E. (2015). The space between us: Evaluating a multi-user affective brain–computer music interface. Brain–Computer Interfaces, 2(2–3), 103–116. https://doi.org/10.1080/2326263X.2015.1101922.
  16. Ehrlich, S., Guan, C., & Cheng, G. (2017). A closed-loop brain–computer music interface for continuous affective interaction. In 2017 international conference on orange technologies (ICOT) (pp. 176–179). Singapore: IEEE. https://doi.org/10.1109/icot.2017.8336116.
  17. Fabes, R. A., & Martin, C. L. (1991). Gender and age stereotypes of emotionality. Personality and Social Psychology Bulletin, 17(5), 532–540. https://doi.org/10.1177/0146167291175008.
  18. Fenton, A., & Alpert, S. (2008). Extending our view on using BCIs for locked-in syndrome. Neuroethics, 1(2), 119–132. https://doi.org/10.1007/s12152-008-9014-8.
  19. Friedrich, O., Racine, E., Steinert, S., Pömsl, J., & Jox, R. J. (2018). An analysis of the impact of brain–computer interfaces on autonomy. Neuroethics. https://doi.org/10.1007/s12152-018-9364-9.
  20. Ghosh, D., & Scott, B. (2018). Facebook’s new controversy shows how easily online political ads can manipulate you. Time. http://time.com/5197255/facebook-cambridge-analytica-donald-trump-ads-data/. Accessed October 28, 2018.
  21. Gilbert, F., Goddard, E., Viaña, J. N. M., Carter, A., & Horne, M. (2017). I miss being me: Phenomenological effects of deep brain stimulation. AJOB Neuroscience, 8(2), 96–109. https://doi.org/10.1080/21507740.2017.1320319.
  22. Glannon, W. (2014). Ethical issues with brain–computer interfaces. Frontiers in Systems Neuroscience. https://doi.org/10.3389/fnsys.2014.00136.
  23. Glannon, W. (2016). Ethical issues in neuroprosthetics. Journal of Neural Engineering, 13(2), 021002. https://doi.org/10.1088/1741-2560/13/2/021002.
  24. Goering, S., Klein, E., Dougherty, D. D., & Widge, A. S. (2017). Staying in the loop: Relational agency and identity in next-generation DBS for psychiatry. AJOB Neuroscience, 8(2), 59–70. https://doi.org/10.1080/21507740.2017.1320320.
  25. Goodwin, T. (2012). Why we should reject ‘Nudge’. Politics, 32(2), 85–92. https://doi.org/10.1111/j.1467-9256.2012.01430.x.
  26. Grübler, G. (2011). Beyond the responsibility gap. Discussion note on responsibility and liability in the use of brain–computer interfaces. AI & Society, 26(4), 377–382. https://doi.org/10.1007/s00146-011-0321-y.
  27. Hao, Y., Budd, J., Jackson, M. M., Sati, M., & Soni, S. (2014). A visual feedback design based on a brain–computer interface to assist users regulate their emotional state. In Proceedings of the extended abstracts of the 32nd annual ACM conference on human factors in computing systems—CHI EA ’14 (pp. 2491–2496). Toronto, Ontario, Canada: ACM Press. https://doi.org/10.1145/2559206.2581132.
  28. Harari, Y. N. (2018). Why technology favors tyranny. The Atlantic, 64–70.
  29. Heath, T. (2016). This employee ID badge monitors and listens to you at work—Except in the bathroom. The Washington Post. https://www.washingtonpost.com/news/business/wp/2016/09/07/this-employee-badge-knows-not-only-where-you-are-but-whether-you-are-talking-to-your-co-workers/?utm_term=.54fb86eba866. Accessed October 18, 2018.
  30. Hildt, E. (2015). What will this do to me and my brain? Ethical issues in brain-to-brain interfacing. Frontiers in Systems Neuroscience. https://doi.org/10.3389/fnsys.2015.00017.
  31. Holland, A. C., & Kensinger, E. A. (2010). Emotion and autobiographical memory. Physics of Life Reviews, 7(1), 88–131. https://doi.org/10.1016/j.plrev.2010.01.006.
  32. Howell, A. (2015). Resilience, war, and austerity: The ethics of military human enhancement and the politics of data. Security Dialogue, 46(1), 15–31. https://doi.org/10.1177/0967010614551040.
  33. Ienca, M., & Haselager, P. (2016). Hacking the brain: Brain–computer interfacing technology and the ethics of neurosecurity. Ethics and Information Technology, 18(2), 117–129. https://doi.org/10.1007/s10676-016-9398-9.
  34. Jebari, K. (2013). Brain machine interface and human enhancement—An ethical review. Neuroethics, 6(3), 617–625. https://doi.org/10.1007/s12152-012-9176-2.
  35. Johnston, S. J., Boehm, S. G., Healy, D., Goebel, R., & Linden, D. E. J. (2010). Neurofeedback: A promising tool for the self-regulation of emotion networks. NeuroImage, 49(1), 1066–1072. https://doi.org/10.1016/j.neuroimage.2009.07.056.
  36. Kashihara, K. (2014). A brain–computer interface for potential non-verbal facial communication based on EEG signals related to specific emotions. Frontiers in Neuroscience. https://doi.org/10.3389/fnins.2014.00244.
  37. Klein, E. (2016). Informed consent in implantable BCI research: Identifying risks and exploring meaning. Science and Engineering Ethics, 22(5), 1299–1317. https://doi.org/10.1007/s11948-015-9712-7.
  38. Klein, E., Goering, S., Gagne, J., Shea, C. V., Franklin, R., Zorowitz, S., et al. (2016). Brain–computer interface-based control of closed-loop brain stimulation: Attitudes and ethical considerations. Brain–Computer Interfaces, 3(3), 140–148. https://doi.org/10.1080/2326263X.2016.1207497.
  39. Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. Proceedings of the National Academy of Sciences, 111(24), 8788–8790. https://doi.org/10.1073/pnas.1320040111.
  40. Lanteaume, L., Khalfa, S., Regis, J., Marquis, P., Chauvel, P., & Bartolomei, F. (2007). Emotion induction after direct intracerebral stimulations of human amygdala. Cerebral Cortex, 17(6), 1307–1313. https://doi.org/10.1093/cercor/bhl041.
  41. Lavazza, A. (2018). Freedom of thought and mental integrity: The moral requirements for any neural prosthesis. Frontiers in Neuroscience. https://doi.org/10.3389/fnins.2018.00082.
  42. Lee, Y.-Y., & Hsieh, S. (2014). Classifying different emotional states by means of EEG-based functional connectivity patterns. PLoS ONE, 9(4), e95415. https://doi.org/10.1371/journal.pone.0095415.
  43. Levin, S. (2017). Facebook told advertisers it can identify teens feeling “insecure” and “worthless.” The Guardian. https://www.theguardian.com/technology/2017/may/01/facebook-advertising-data-insecure-teens. Accessed October 18, 2018.
  44. Lin, Y.-P., Jung, T.-P., Wang, Y., & Onton, J. (2015). Toward affective brain–computer interface: Fundamentals and analysis of EEG-based emotion classification. In A. Konar & A. Chakraborty (Eds.), Emotion recognition (pp. 315–341). Hoboken, NJ: Wiley.
  45. Lucivero, F., & Tamburrini, G. (2008). Ethical monitoring of brain-machine interfaces: A note on personal identity and autonomy. AI & Society, 22(3), 449–460. https://doi.org/10.1007/s00146-007-0146-x.
  46. Lupton, D. (2015). Quantified sex: A critical analysis of sexual and reproductive self-tracking using apps. Culture, Health & Sexuality, 17(4), 440–453. https://doi.org/10.1080/13691058.2014.920528.
  47. Marsh, S. (2018). Neurotechnology, Elon Musk and the goal of human enhancement. The Guardian. https://www.theguardian.com/technology/2018/jan/01/elon-musk-neurotechnology-human-enhancement-brain–computer-interfaces. Accessed October 22, 2018.
  48. McCullagh, P., Lightbody, G., Zygierewicz, J., & Kernohan, W. G. (2014). Ethical challenges associated with the development and deployment of brain computer interface technology. Neuroethics, 7(2), 109–122. https://doi.org/10.1007/s12152-013-9188-6.
  49. Mecacci, G., & Haselager, P. (2017). Identifying criteria for the evaluation of the implications of brain reading for mental privacy. Science and Engineering Ethics. https://doi.org/10.1007/s11948-017-0003-3.
  50. Mellers, B. A., & McGraw, A. P. (2001). Anticipated emotions as guides to choice. Current Directions in Psychological Science, 10(6), 210–214. https://doi.org/10.1111/1467-8721.00151.
  51. Moreno, J. D. (2012). Mind wars: Brain science and the military in the twenty-first century. New York: Bellevue Literary Press.
  52. Mühl, C., Allison, B., Nijholt, A., & Chanel, G. (2014). A survey of affective brain computer interfaces: Principles, state-of-the-art, and challenges. Brain–Computer Interfaces, 1(2), 66–84. https://doi.org/10.1080/2326263X.2014.912881.
  53. Myrden, A., & Chau, T. (2017). A passive EEG-BCI for single-trial detection of changes in mental state. IEEE Transactions on Neural Systems and Rehabilitation Engineering, 25(4), 345–356. https://doi.org/10.1109/TNSRE.2016.2641956.
  54. Neurowear. (2018). Projects. http://neurowear.com/projects/. Accessed October 22, 2018.
  55. Nijboer, F., Morin, F. O., Carmien, S. P., Koene, R. A., Leon, E., & Hoffmann, U. (2009). Affective brain–computer interfaces: Psychophysiological markers of emotion in healthy persons and in persons with amyotrophic lateral sclerosis. In 2009 3rd international conference on affective computing and intelligent interaction and workshops (ACII 2009) (pp. 1–11). Amsterdam, Netherlands: IEEE. https://doi.org/10.1109/acii.2009.5349479.
  56. O’Brolchain, F., & Gordijn, B. (2014). Brain–computer interfaces and user responsibility. In G. Grübler & E. Hildt (Eds.), Brain–computer-interfaces in their ethical, social and cultural contexts (pp. 163–182). Dordrecht: Springer. https://doi.org/10.1007/978-94-017-8996-7_14.
  57. Pan, J., Li, Y., & Wang, J. (2016). An EEG-based brain–computer interface for emotion recognition. In 2016 international joint conference on neural networks (IJCNN) (pp. 2063–2067). Vancouver, BC, Canada: IEEE. https://doi.org/10.1109/ijcnn.2016.7727453.
  58. Picard, R. W. (2000). Affective computing. Cambridge, MA: The MIT Press.
  59. Polonski, V. W. (2017). How artificial intelligence conquered democracy. Independent. https://www.independent.co.uk/news/long_reads/artificial-intelligence-democracy-elections-trump-brexit-clinton-a7883911.html. Accessed October 28, 2018.
  60. Prebble, S. C., Addis, D. R., & Tippett, L. J. (2013). Autobiographical memory and sense of self. Psychological Bulletin, 139(4), 815–840. https://doi.org/10.1037/a0030146.
  61. Reuderink, B., Mühl, C., & Poel, M. (2013). Valence, arousal and dominance in the EEG during game play. International Journal of Autonomous and Adaptive Communications Systems, 6(1), 45. https://doi.org/10.1504/IJAACS.2013.050691.
  62. Roberts, T. (2015). Emotional regulation and responsibility. Ethical Theory and Moral Practice, 18(3), 487–500. https://doi.org/10.1007/s10677-014-9535-7.
  63. Roeser, S., Alfano, V., & Nevejan, C. (2018). The role of art in emotional-moral reflection on risky and controversial technologies: The case of BNCI. Ethical Theory and Moral Practice, 21(2), 275–289. https://doi.org/10.1007/s10677-018-9878-6.
  64. Roeser, S., & Todd, C. S. (Eds.). (2014). Emotion and value (1st ed.). Oxford: Oxford University Press.
  65. Schechtman, M. (1996). The constitution of selves. Ithaca, NY: Cornell University Press.
  66. Schechtman, M. (2005). Personal identity and the past. Philosophy, Psychiatry, and Psychology, 12(1), 9–22. https://doi.org/10.1353/ppp.2005.0032.
  67. Schermer, M. (2009). The mind and the machine. On the conceptual and moral implications of brain-machine interaction. NanoEthics, 3(3), 217–230. https://doi.org/10.1007/s11569-009-0076-9.
  68. Sharon, T. (2017). Self-tracking for health and the quantified self: Re-articulating autonomy, solidarity, and authenticity in an age of personalized healthcare. Philosophy & Technology, 30(1), 93–121. https://doi.org/10.1007/s13347-016-0215-5.
  69. Steinert, S., Bublitz, C., Jox, R., & Friedrich, O. (2018). Doing things with thoughts: Brain–computer interfaces and disembodied agency. Philosophy & Technology. https://doi.org/10.1007/s13347-018-0308-4.
  70. Sunstein, C. R. (2015). The ethics of nudging. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.2526341.
  71. Tamburrini, G. (2009). Brain to computer communication: Ethical perspectives on interaction models. Neuroethics, 2(3), 137–149. https://doi.org/10.1007/s12152-009-9040-1.
  72. Tennison, M. N., & Moreno, J. D. (2012). Neuroscience, ethics, and national security: The state of the art. PLoS Biology, 10(3), e1001289. https://doi.org/10.1371/journal.pbio.1001289.
  73. Vlek, R. J., Steines, D., Szibbo, D., Kübler, A., Schneider, M.-J., Haselager, P., et al. (2012). Ethical issues in brain–computer interface research, development, and dissemination. Journal of Neurologic Physical Therapy, 36(2), 94–99. https://doi.org/10.1097/NPT.0b013e31825064cc.
  74. Weinberger, S., & Greenbaum, D. (2016). Are BMI prosthetics uncontrollable Frankensteinian monsters? Brain–Computer Interfaces, 3(3), 149–155. https://doi.org/10.1080/2326263X.2016.1207495.
  75. Widge, A. S., Dougherty, D. D., & Moritz, C. T. (2014). Affective brain–computer interfaces as enabling technology for responsive psychiatric stimulation. Brain–Computer Interfaces, 1(2), 126–136. https://doi.org/10.1080/2326263X.2014.912885.
  76. Wilson, T. D., & Gilbert, D. T. (2005). Affective forecasting: Knowing what to want. Current Directions in Psychological Science, 14(3), 131–134. https://doi.org/10.1111/j.0963-7214.2005.00355.x.
  77. Wu, S., Xu, X., Shu, L., & Hu, B. (2017). Estimation of valence of emotion using two frontal EEG channels. In 2017 IEEE international conference on bioinformatics and biomedicine (BIBM) (pp. 1127–1130). Kansas City, MO: IEEE. https://doi.org/10.1109/bibm.2017.8217815.
  78. Yuste, R., Goering, S., Arcas, B. A. Y., Bi, G., Carmena, J. M., Carter, A., et al. (2017). Four ethical priorities for neurotechnologies and AI. Nature, 551(7679), 159–163. https://doi.org/10.1038/551159a.

Copyright information

© The Author(s) 2019

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Department of Values, Technology and Innovation, Faculty of Technology, Policy and Management, Delft University of Technology, Delft, The Netherlands
  2. Institute of Ethics, History and Theory of Medicine, Ludwig-Maximilians-Universität München, Munich, Germany