1 Introduction

1.1 BCIs for DOC Patients

During most of the history of brain-computer interface (BCI) research, the main goal was to develop new tools that could provide communication for patients who could not communicate otherwise due to severe motor disabilities [1–3]. Patients with late-stage amyotrophic lateral sclerosis (ALS, also called Lou Gehrig’s disease) were a classic target user group, since they often have little or no reliable control of voluntary muscle activity, but do exhibit signs of consciousness and the desire to communicate. While helping such patients remains important, some newer review and commentary articles have identified new directions to extend BCI technology to help different patient groups, including persons diagnosed with disorders of consciousness (DOCs) [2–4].

Recently, several groups have presented work that extends BCI technology to DOC patients. Results have shown that 17–42 % of these patients do exhibit activity reflecting that they are conscious, and may even be able to communicate using a BCI, despite medical diagnoses suggesting communication is impossible [5–10]. Thus, in addition to providing communication, BCI technology could be adapted to a wholly new capability: assessment of consciousness. This new use of BCI technology, not just for communication but also to determine whether someone is mentally capable of communication, could have a tremendous impact on patients and their families.

This new research direction requires innovative approaches to human-computer interaction (HCI). Conventional BCIs assume that target users are conscious and have sufficient cognitive function to communicate. Thus, there has been very little work focused on using BCI technology for assessment of consciousness. BCIs also usually assume intact visual function. It is often unknown whether DOC patients have intact vision; therefore, BCIs for this population should provide specialized task instructions and new paradigms that do not depend on vision. Furthermore, user-friendly interaction is even more important than in other BCIs, since confusing or ambiguous instructions could hamper a patient’s only chance at re-assessment and, if possible, communication.

The new mindBEAGLE system was developed to meet these needs. Last year, we presented a paper at the HCI International conference series that introduced our new mindBEAGLE system, its unique HCI protocols, and results from initial evaluation. The current paper reviews the mindBEAGLE approach (including hardware and software), presents new results from patients, and concludes with discussion of future directions and unique HCI-related issues with DOC patients.

1.2 MindBEAGLE

The mindBEAGLE system shown in Fig. 1 uses auditory stimuli to present task instructions, and auditory or vibrotactile stimuli to present cues and feedback. The left panel shows the hardware components: a laptop running the mindBEAGLE software, an electrode cap, amplifier, earbuds, and a vibrotactile stimulator. Although this system is being sold and used by research partners, it is still in development, primarily to improve portability without sacrificing signal quality in real-world settings.

Fig. 1.

The left panel shows the mindBEAGLE system. The right panel shows a close-up of the vibrotactile stimulators attached to the wrists. The right panel also shows the optional anti-static grounding straps. Usually, mindBEAGLE obtains adequate quality signals without any grounding strap; however, in rare settings they can be used to reduce noise.

The mindBEAGLE system can use four general approaches: based on motor imagery (MI); auditory P300s; vibrotactile P300s with two stimulators (placed on the left and right wrists); and vibrotactile P300s with three stimulators (placed on the wrists and one ankle). These four approaches are available across three usage modes: assessment, quick test, and communication. We are currently developing two additional modes, which provide rehabilitation and prediction.
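The approach/mode combinations can be summarized as a small lookup table. The sketch below is purely illustrative (the labels and the availability map are our own shorthand, not identifiers from the mindBEAGLE software), following the availability described in the following paragraphs: quick test supports the auditory and two-stimulator vibrotactile approaches, while communication supports motor imagery and three-stimulator vibrotactile stimuli.

```python
from itertools import product

# Hypothetical shorthand labels for mindBEAGLE's four approaches:
# MI = motor imagery, AEP = auditory P300s, VT2/VT3 = vibrotactile
# P300s with two or three stimulators.
APPROACHES = ["MI", "AEP", "VT2", "VT3"]
MODES = ["assessment", "quick_test", "communication"]

# Not every approach is available in every mode (per the mode
# descriptions in the text).
AVAILABLE = {
    "assessment": {"MI", "AEP", "VT2", "VT3"},
    "quick_test": {"AEP", "VT2"},
    "communication": {"MI", "VT3"},
}

def valid_combinations():
    """Enumerate (approach, mode) pairs that the text describes."""
    return [(a, m) for a, m in product(APPROACHES, MODES)
            if a in AVAILABLE[m]]
```

This yields eight valid pairs: four for assessment, two for quick test, and two for communication.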

Assessment Mode. The goals of this mode are: (1) to determine whether a patient can produce reliable, distinct EEG signals that are adequate for BCI-based communication; and (2) to identify which approach is most effective for that patient. For example, with MI, auditory cues instruct the subject to imagine left or right hand movement. If the resulting EEG activity indicates successful task performance, then the patient may be able to communicate using a BCI based on MI.

Quick Test Mode. This mode is designed to provide a quicker means of checking for indicators of consciousness than assessment mode. It is available with the auditory approach and the vibrotactile approach with two stimulators. The quick tests are essentially the same as the tests in assessment mode.

Communication Mode. Communication mode can currently provide YES/NO communication through left vs. right hand motor imagery or through left wrist vs. right wrist vs. either ankle vibrotactile stimuli. In communication mode, the experimenter can pre-record questions that mindBEAGLE presents within a synchronous BCI paradigm, such as “Do you want us to change your bed position?” or “Do you want to change the temperature?” Since the communication is binary, additional questions are often needed to gather more information, such as whether the temperature should be increased or decreased, and by how much.
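Since each answer carries only one bit, follow-up questions narrow a request step by step. The sketch below illustrates this with hypothetical question wording and a hypothetical `refine_temperature` helper; it is not part of the mindBEAGLE software.

```python
def refine_temperature(answer_fn):
    """Narrow down a temperature request with successive YES/NO questions.

    answer_fn maps a question string to True (YES) or False (NO),
    standing in for the patient's BCI-mediated answer.
    """
    if not answer_fn("Do you want to change the temperature?"):
        return "no change"
    if answer_fn("Should the temperature be increased?"):
        direction = "increase"
    else:
        direction = "decrease"
    amount = "a lot" if answer_fn("By a large amount?") else "a little"
    return f"{direction} by {amount}"

# Example: a patient answering YES, NO, YES yields "decrease by a lot".
answers = iter([True, False, True])
result = refine_temperature(lambda q: next(answers))
```

Each additional question halves the remaining options, which is why binary communication often requires several questions to gather the information a single open-ended question would provide.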

Rehabilitation Mode. This mode will allow cognitive, sensory, and/or motor rehabilitation. Because of the challenges of conducting rehabilitation with this patient population, and the many types of rehabilitation they may seek, developing and improving rehabilitation mode should be an ongoing effort over many years. This mode may leverage some developments from g.tec’s RecoveriX system, which is currently designed to provide motor rehabilitation for persons affected by stroke.

Prediction Mode. If rehabilitation is available, patients may want to learn about the most likely outcome, and how much training is needed to achieve optimal rehabilitation. Prediction mode will help predict rehabilitation outcomes, which could not only help patients consider the effort involved but might also facilitate decisions relating to long-term care, insurance, expected costs, and other factors. Prediction mode will also identify which parameters (such as target frequencies for MI) should be most effective, which could be used to automatically update parameters used in the classification mode. Unlike the preceding three modes, prediction mode may not entail distinct runs or sessions. Instead, prediction mode may operate by re-analyzing data collected during other modes.

2 Methods

2.1 Subjects

We present data from two patients who exhibited promising EEG signals during mindBEAGLE assessment. Both patients had been diagnosed as minimally conscious. Persons in the minimally conscious state (MCS) are considered aware of themselves, their emotions, and their environment in a fluctuating manner [11]. Medical experts typically attempt to communicate with these patients, but communication at the behavioral level is impossible, perhaps because of motor disabilities and aphasia. Because these patients cannot provide informed consent, consent was obtained through a legally authorized person, and all procedures were approved by an ethical review board at the relevant hospital partner.

Patient 1 was a 61-year-old male. Patient 2 was a 40-year-old female, and a patient at the Sart Tilman Liège University Hospital in Liège, Belgium.

2.2 Procedure

Before working with each patient, we tried to identify autobiographical questions relevant to each patient. This could involve reviewing the anamnesis and/or speaking with family members. We developed YES/NO questions such as “Is your father named Jose?” or “Were you born in Austria?”

Each recording session began with mounting a cap on the patient, affixing the vibrotactile stimulators, and inserting earbuds into the ears. Each session involved a variable number of “assessment” runs. We now use all four of the approaches presented above; however, during the data collection presented in this paper, some of the approaches were still in development and were not used. In all tasks, chance accuracy was 12.5 %. Patients had a short break after each run, and each session lasted 45–60 min. Patient 1 exhibited the indicators of awareness shown in the following section during the third recording session. Patient 2 exhibited these indicators in the first session. A simulation of a session with the mindBEAGLE system is shown in Fig. 2.
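The 12.5 % chance level corresponds to selecting one target among eight options. As an illustration (our own addition, not an analysis from this paper), a simple binomial calculation shows how many correct runs are needed before performance is unlikely to arise from guessing alone:

```python
from math import comb

# Chance accuracy for an eight-option paradigm: one target among eight.
chance = 1 / 8  # 0.125, i.e. 12.5 %

def p_at_least(k, n, p=0.125):
    """Probability of k or more correct selections in n runs under chance."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# With 10 runs, how many must be correct before the result is unlikely
# (< 5 %) to arise from guessing? The answer is 4 of 10 runs.
threshold = next(k for k in range(11) if p_at_least(k, 10) < 0.05)
```

This kind of threshold is one way to decide whether an above-chance accuracy reflects genuine task performance rather than luck; the 10-run and 5 % figures here are illustrative choices, not parameters from the mindBEAGLE protocol.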

Fig. 2.

This picture simulates a physician working with a DOC patient. The laptop in the foreground presents the user’s free-running EEG (top left), brain map (top right), and evoked potential activity (bottom right).

3 Results

3.1 Patient 1

Figure 3 shows results from Patient 1, who participated in AEP and MI assessments. The MI assessments did not reveal indicators of consciousness, which may stem from challenges in MI BCIs that have been widely recognized in the BCI community.

Fig. 3.

Data collected from Patient 1 during the AEP paradigm. The left panel shows how accuracy increases to 100 % with sufficient repetitions. This is typical of healthy users as well; communication with fewer trials is challenging. The right panel shows the topographic distribution of brain activity elicited by the task, which was selective attention to one of eight tones. Green shading indicates regions with a statistically significant difference. Six channels show a P300 to target tones only, reflecting that the user was able to silently count target tones while ignoring other stimuli. This confirms that the user could process stimuli and understand instructions. Thus, this patient might be able to communicate. However, patients who can process auditory and/or vibrotactile stimuli and follow the basic instructions in the assessment paradigm might not be willing and/or able to communicate.
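The accuracy gain with repetitions reflects how averaging repeated stimulations suppresses background EEG noise by roughly the square root of the number of trials. The following sketch simulates this effect with synthetic data; all numbers are illustrative, and this is neither patient data nor mindBEAGLE code.

```python
import numpy as np

rng = np.random.default_rng(0)  # fixed seed for reproducibility

# Synthetic illustration: a P300-like deflection buried in noise,
# recovered by averaging repeated stimulations.
n_samples = 200
p300 = np.zeros(n_samples)
p300[75:100] = 2.0   # idealized target response (arbitrary units)
noise_sd = 10.0      # single-trial noise dwarfs the response

def averaged_snr(n_reps):
    """Signal-to-noise ratio of the average over n_reps simulated trials."""
    trials = p300 + rng.normal(0, noise_sd, size=(n_reps, n_samples))
    avg = trials.mean(axis=0)
    signal = avg[75:100].mean()  # mean amplitude in the response window
    noise = avg[:50].std()       # residual noise in the baseline window
    return signal / noise

# Averaging n_reps trials shrinks the noise by sqrt(n_reps), so SNR
# grows with repetitions -- which is why classification accuracy
# improves with more trials before leveling off.
```

This is the basic rationale behind requiring more repetitions for reliable classification, and why patients, who often produce noisier or weaker responses than healthy users, may need more data.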

3.2 Patient 2

Figure 4 shows results from Patient 2, who participated in AEP, VT2, and VT3 assessments.

Fig. 4.

Data collected from Patient 2 across the auditory evoked potential (AEP) paradigm (left panels), vibrotactile stimuli with two stimulators (VT2, middle panels), and vibrotactile stimuli with three stimulators (VT3, right panels). All three of these protocols led to clear differences in the evoked EEG (bottom panels), and the classifier could successfully identify the target (top panels). The VT2 protocol was especially effective, leading to 100 % classification accuracy after only three trials. The VT3 protocol also led to high accuracy, suggesting that the patient possibly could communicate. (Color figure online)

4 Discussion

4.1 Results Summary

Based on the work presented above, and very new data still being analyzed, we have four observations that are generally consistent with other recent work. First, in some users, the P300 BCI approaches relied largely on non-P300 activity. Second, the P300 BCI approaches were generally more effective than the MI approach. Third, patients exhibited substantial within- and across-subject variability. Fourth, effective BCI performance was possible for persons with severe disabilities, but often required more data than with healthy users.

4.2 Future Directions

Because of the novelty of this research field and patient group, we see considerable opportunity for new research. Our highest priority is validation. We need to evaluate mindBEAGLE and emerging technologies with many more patients. The challenges with the MI approach can only be addressed through further research, along with several other future directions.

From an HCI perspective, DOC patients introduce several challenges that merit future study. Visual stimuli, which are vital in most interfaces, cannot provide reliable interaction. DOC patients have cognitive and/or attentional deficits that may create difficulty understanding the tasks, remembering instructions, maintaining attention to task demands, or focusing on the different cues. They may also have sensory deficits, such as difficulty hearing or feeling, which could render auditory and/or vibrotactile modalities ineffective. These challenges have long been recognized within the HCI community, such as within the design of assistive technologies (ATs) designed for persons with disabilities [12]. However, DOC patients introduce another challenge that is unique within HCI. Many of these patients fade in and out of consciousness, with no a priori way to determine whether they are in an up or down state before beginning a session. Thus, we need to attempt assessment several times, since assessments that do not reveal indicators of consciousness may simply mean that we need to try again. Furthermore, we currently have no way to know how many assessments are appropriate before concluding that a patient could never communicate. Even if many prior assessments were unsuccessful, there is always a chance that the patient could pass an assessment if medical or research experts try one more time. This may create a heart-wrenching dilemma: given the limited resources in a medical environment, when is it appropriate to give up with one patient and move on to the next one to possibly change his/her life?

This is a serious challenge, and we currently have no solution. Behaviorally, there is no known way to determine when patients may be sufficiently aware for a mindBEAGLE session. It may be possible to use EEG or other measures to identify indicators of awareness more quickly than we can right now, which is an important future direction we are now exploring. In some cases, administering medications may increase the chance that a patient will be aware prior to a session, although this solution introduces further ethical challenges.

Another unique challenge involves “BCI inefficiency”. This phenomenon, also called “BCI illiteracy”, means that a user is unable to use a particular BCI approach. A BCI used for communication should ideally provide communication for all users who need it. However, this is not always possible, just as with conventional interfaces: some people cannot use keyboards or mice due to motor disabilities or other reasons. Moreover, given this unique patient group, it is not realistic to expect high literacy. Many patients who are diagnosed as unable to communicate really are unable to communicate. If they are not able to use a BCI for communication, this may not reflect a failure by the BCI designers. Rather, it may indicate that the BCI is performing as well as can be expected for a patient who is indeed unable to reliably produce EEG differences. Many patients diagnosed with DOC are indeed unable to understand instructions, maintain attention, develop and implement goal-directed behavior required to answer questions, etc. These patients are presumably unable to communicate with any known technology, and could not be helped without major advancements in medical technology. Nonetheless, we are exploring improved protocols and signal processing approaches that could improve BCI literacy. Additional data from DOC patients should help us identify relevant EPs and other EEG characteristics that could lead to improved classification accuracy and literacy.

Furthermore, the MI approach usually requires training, and has been recognized within the BCI community as more prone to BCI inefficiency [13, 14]. While most healthy people can attain effective communication with the MI approach, MI BCIs are not able to detect reliably discriminable brain signals in a minority of users. The training requirements in most MI BCI approaches are more daunting for DOC patients for at least three reasons. First, training sessions are pointless if the patient cannot understand instructions and follow the task. With this target patient group, a P300 BCI may be needed to assess these mental capabilities. However, some healthy people can attain effective MI-based communication with fairly brief training [15], even with a limited electrode montage [16]. Second, any session with this patient group is much less casual than with healthy persons. Third, it is unknown whether this target population is capable of producing clear differences between left and right hand MI that a BCI could detect. Research has shown that persons with late-stage ALS can produce MI signals adequate for BCI control [17]; therefore, the inability to produce certain movements, even for an extended time, does not necessarily prevent people from imagining movement in a way that an MI BCI could detect. Patients with DOC have different disabilities than late-stage ALS patients, and thus this question remains open for further study.

So far, we have focused on EEG activity. Within the BCI community, there has been some attention to “hybrid” BCIs [18] that might combine EEG-based signals with other tools to image the brain (such as fMRI) or other biosignals (such as activity from the heart, eyes, or muscles). These methods may be less promising as tools to provide communication for persons diagnosed with DOC. Aside from high cost and low portability, fMRI requires a very strong magnetic field that is difficult or impossible to use with the electronic and metal devices that DOC patients need. Deriving information from other biosignals can be helpful for diagnosis, but is not especially helpful when trying to provide communication to persons with little or no ability to voluntarily modulate these signals.

Like many BCI researchers, we are interested in improving software, including better HCI-based and user-centered software to interact with users. In addition to the unique challenges of working with DOC patients, we also need to present information to system operators in an engaging, informative manner. Figure 2 shows how system operators can view the EEG, brain maps, and ERPs in real-time. We need additional testing to determine whether this is indeed the most informative and effective way to present information to system operators. We also need to consider differences among system operators, who might be medical doctors, nurses, research professors, postdocs, graduate students, technicians, or other people with different backgrounds and skills. The most effective interface components may differ for different users based on their expertise.

Hardware development is another important future direction. Figures 1 and 2 show systems that used wired, active, gel-based electrodes embedded in a cap. A wireless system might be easier to use while eliminating the risk of snagging cables on equipment. Dry electrodes might provide adequate signal quality while reducing preparation and cleanup time. New electrode montages and mounting hardware could enable electrodes that do not require a cap, such as electrodes embedded in headphones, headbands, or other head-mounted devices.

In summary, the results obtained so far from our group and other groups have clearly shown that some patients diagnosed with DOC do exhibit indicators of consciousness with BCI technology and might be able to communicate, even though they do not exhibit indicators of consciousness on a behavioral level. On the other hand, the majority of DOC patients do not exhibit such indicators with BCIs. This is still a new research direction with many unanswered questions and very serious challenges, and we do not know which approaches will be most effective. Drawing on lessons from BCI research, the most effective approach will probably vary across different users, which underscores the importance of providing different approaches through different sensory modalities to increase the chance that at least one of them will be effective. We are excited about the prospect of helping some people within this unique patient population, and both hope and expect that future research will lead to more universal and effective solutions.