
1 Introduction

Brain-Computer Interfaces (BCIs) offer a way of communicating the electrical activity produced by a user's thoughts and intents to a machine. Different actions, and specifically action intents (the intention or desire to initiate an action), are associated with different patterns of electrical activity. Affordable BCI technology has only recently become accessible to users, and the accompanying software remains limited and experimental. The primary effect of a brain-computer interface is to introduce another level of multimodality for the user. In this work, we utilize an affordable, non-invasive BCI device to investigate the efficacy of giving motion-restricted users elementary control over a document reader. By detecting signals from the Emotiv Epoc and mapping the detected cognitive actions to physical keystrokes, users are able to navigate through a document without any movement of their limbs. We first situate the reader with background and related work, then present the prototype and the user study description, followed by the results and discussion.

2 Background and Related Work

A brain-computer interface (BCI) device provides a new channel for communication and control [1, 2]. Traditionally, BCI-related research has been based on expensive and complex prototypes such as the Graz Brain-Computer Interface II [3, 4]. The cost of conducting research with EEG-based BCI devices has generally been high, and using such equipment often requires the assistance of specially trained experts [5, 6]. During the last decade, more affordable consumer BCI devices have become available and have begun to be used for academic research. These include, for example, the Neurosky Mindset headset, used to enhance cognitive functions and increase satisfaction within a game called 'Neuro Wander' [7]; the Myndplay Brainband, used to elicit different mind states in participants viewing emotional videos [8]; and the OpenBCI system, used to investigate optimal electrode placement for motor imagery applications [9]. The Emotiv Epoc device is inexpensive, commercially available, and has been used extensively in the past for similar types of research with good results [10]. Stytsenko et al. reported that the Epoc was "able to acquire real EEG data which is comparable to the one acquired by using conservative EEG devices" [11]. It should be noted that the Emotiv headset measures EEG and ambiguous EMG signals at the same time [11]. However, in this work we aim to illustrate the use of the Emotiv Epoc to operate the document navigation software, and we therefore focus on the interaction capability rather than the clarity of the signal.

3 System Description

The prototype system architecture consists of four key components (see Fig. 1: left): (a) the Emotiv Epoc headset, (b) the EmoKey software, (c) the Epoc Control Panel, and (d) the MindReader software. The Emotiv Epoc headset receives raw data from the individual and sends the signal to the Epoc Control Panel. The BCI contains 14 channels and uses a sequential sampling rate of 2048 Hz, a bandwidth of 0.2–43 Hz, digital notch filters (at 50 Hz and 60 Hz) and a dynamic range of 8400 μV(pp). The P300 evoked potential appears approximately 300 ms after a relevant sensory stimulus. The Control Panel removes noise and separates distinct, recognizable pattern actions such as "think push", "think pull", "think lift", "smile" and "clench". Upon detecting one of the selected patterns that constitutes a specific action, the Control Panel triggers the (customized) EmoKey software, which translates the user's actions into physical keyboard keystrokes. In turn, the keystrokes are detected by the MindReader software, which instigates navigational actions in the document reader.
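In the prototype this translation is configured through EmoKey's rule interface rather than written as code. Purely as an illustration of the mapping step, the following minimal C# sketch (with hypothetical action labels and key bindings, not the actual EmoKey rules) shows how a recognised mental command could be turned into a keystroke:

```csharp
using System.Collections.Generic;
using System.Windows.Forms; // provides SendKeys

// Hypothetical illustration: translate a recognised mental command
// (as labelled by the Epoc Control Panel) into a keystroke, mirroring
// the role played by the EmoKey rules in the prototype.
static class ActionToKeystroke
{
    private static readonly Dictionary<string, string> Bindings =
        new Dictionary<string, string>
        {
            { "push", "{RIGHT}" }, // e.g. "think push" -> page forward
            { "pull", "{LEFT}" }   // e.g. "think pull" -> page backward
        };

    public static void Emit(string action)
    {
        string key;
        if (Bindings.TryGetValue(action, out key))
            SendKeys.SendWait(key); // inject the keystroke into the focused window
    }
}
```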

Fig. 1. (left) Key components to operate the MindReader system; (centre) MindReader beta version; (right) a user interacting with MindReader

Our prototype software, MindReader, is written in C# with WinForms under the .NET framework. The specific release (Beta V.01) presented in this paper and used for the user testing comprises a document reader that loads a series of documents that have been converted from PDFs to images. In this work, the MindReader software presents a two-page reading format (see Fig. 1: centre), but it can also be adapted to a single-page view. In its current beta form the software provides controls for user testing; namely, the investigator can initiate a "start task" trigger, which records readings such as location, navigation speed and user intent (actions performed). The software can also be driven by simple keystrokes or on-screen buttons for moving between pages, but these are included only as an addition for future work involving the integration of eye-tracking equipment with BCI devices. Users are able to navigate forward and backward one page at a time, controlled by the Emotiv Epoc.
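To make the interaction concrete, the sketch below (not the authors' implementation; class and folder names are illustrative assumptions) shows how a two-page WinForms reader can display pre-rendered page images, respond to the keystrokes injected by EmoKey, and log each navigation event with a timestamp:

```csharp
using System;
using System.Collections.Generic;
using System.Drawing;
using System.IO;
using System.Linq;
using System.Windows.Forms;

// Illustrative two-page reader: pages are PNG images pre-rendered from a PDF,
// arrow keys (as injected by EmoKey) move one page at a time, and every
// navigation event is logged with a timestamp for later analysis.
public class ReaderForm : Form
{
    private readonly List<Image> pages = new List<Image>();
    private readonly PictureBox leftPage =
        new PictureBox { Dock = DockStyle.Fill, SizeMode = PictureBoxSizeMode.Zoom };
    private readonly PictureBox rightPage =
        new PictureBox { Dock = DockStyle.Fill, SizeMode = PictureBoxSizeMode.Zoom };
    private readonly List<string> log = new List<string>();
    private int current; // index of the left-hand page (page 0 = beginning)

    public ReaderForm(string imageFolder)
    {
        foreach (var file in Directory.GetFiles(imageFolder, "*.png").OrderBy(f => f))
            pages.Add(Image.FromFile(file));

        var layout = new TableLayoutPanel { Dock = DockStyle.Fill, ColumnCount = 2, RowCount = 1 };
        layout.ColumnStyles.Add(new ColumnStyle(SizeType.Percent, 50f));
        layout.ColumnStyles.Add(new ColumnStyle(SizeType.Percent, 50f));
        layout.RowStyles.Add(new RowStyle(SizeType.Percent, 100f));
        layout.Controls.Add(leftPage, 0, 0);
        layout.Controls.Add(rightPage, 1, 0);
        Controls.Add(layout);

        KeyPreview = true;   // ensure the form receives the injected keystrokes
        KeyDown += OnKeyDown;
        ShowSpread();
    }

    private void OnKeyDown(object sender, KeyEventArgs e)
    {
        if (e.KeyCode == Keys.Right && current < pages.Count - 1) current++;
        else if (e.KeyCode == Keys.Left && current > 0) current--;
        else return;

        // Record the navigation event (timestamp, action, resulting page).
        log.Add(string.Format("{0:O}\t{1}\tpage {2}", DateTime.Now, e.KeyCode, current));
        ShowSpread();
    }

    private void ShowSpread()
    {
        if (pages.Count == 0) return;
        leftPage.Image = pages[current];
        rightPage.Image = current + 1 < pages.Count ? pages[current + 1] : null;
    }
}
```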

4 Testing

To test the prototype we recruited 14 participants, two of whom acted as pilot subjects. The 12 main participants (5 female, 7 male) were given a bill of rights and a consent form before the testing. The participants were then given a short description of the experiment, and each chose the actions to be used for the forward and backward commands. Specifically, participants could choose from "Think Push", "Think Pull", "Lift", "Clench" and "Smile". Once a participant had chosen a set of actions, these were calibrated to the specific individual, a process that took no more than 5 min per participant (see Fig. 1: right). Once the BCI connection was established and calibrated, our prototype software, MindReader, was loaded with a 16-page document in a two-page (book style) view, and the participants were given a set of tasks to complete. Interfaces are usually tested with participants making think-aloud comments about what they are thinking and what is happening; for this evaluation, no think-aloud comments were permitted so as not to influence and skew the results. After the tasks, a semi-structured interview was conducted for qualitative feedback. The tasks were: (1) navigate to the contents page (page 1); (2) navigate back to the beginning (page 0); (3) navigate to page 6; (4) navigate back to the beginning; (5) navigate to page 12; (6) navigate back to the beginning; (7) navigate to the last page; (8) navigate back to the beginning; (9) answer the question "How many babies do female goats give birth to?" (the answer is on page –); (10) browse the magazine freely (for qualitative feedback). Table 1 presents an overview of the tasks, the time taken, and the errors made while performing them.

Table 1. Data overview (including time and errors)

5 Conclusions and Future Work

We present a prototype of a document reader that uses an affordable BCI to control elementary functions. A pilot test with 14 participants provided feedback on the efficacy of using this affordable BCI to control the elementary navigation of a document reader and acted as a cycle in our user-centered design process. Our experimental results demonstrate that participants can navigate through the pages of a document reader in order, although minimal calibration is needed and errors, such as overshooting pages, were present. We were also able to identify some potential limitations; for example, an increased number of pages in a document increases the navigational effort, fatigue and error rate (a limitation that needs to be addressed through improved interaction). Specifically, future work will aim to (a) reduce errors by improving the software algorithms and by testing further hardware variations, and (b) answer behavioral and interactivity questions such as "Is getting further into the document of proportional effort to getting to an earlier page?". We also plan to include further complements to the setup that will give users richer interaction, such as pairing the BCI with an eye-tracker for gaze fixation recognition.