Sequence-based manipulation of robotic arm control in brain machine interface

  • Justin Kilmarx
  • Reza Abiri
  • Soheil Borhani
  • Yang Jiang
  • Xiaopeng Zhao
Regular Paper


In brain machine interfaces (BMI), brain activities are recorded by invasive or noninvasive approaches and translated into command signals to control external prosthetic devices such as a computer cursor, a wheelchair, or a robotic arm. Although many studies have confirmed the capability of BMI systems in controlling multi-degree-of-freedom (DOF) prosthetic devices using invasive approaches, BMI research using noninvasive paradigms is still in its infancy. In this paper, a new robotic BMI platform was developed using electroencephalography (EEG) technology to control a 6-DOF robotic arm. EEG signals were collected from the scalp using a wireless headset and a fast-training paradigm named "imagined body kinematics". A regression model was employed to decode the kinematic parameters from the EEG signals. The subjects were instructed to voluntarily control a virtual cursor in multiple trials to hit different pre-programmed targets on a screen in an optimized sequence. The command signals generated from hitting the targets during trials were applied to control sequential movements of the robotic arm in a discrete manner to manipulate an object in a two-dimensional workspace. This approach is derived from a basic shared control strategy in which the robotic arm is responsible for carrying out complex maneuvers based on the user's intention. The proposed BMI platform yielded a high success rate of 70% in a sequence-based manipulation task after only a short training time (10 min). The developed platform serves as a proof of concept for EEG-based neuroprosthetic devices.


Keywords: Brain machine interface · EEG · Fast-training · Robotic arm · Manipulation task

1 Introduction

Interest in developing interfaces between humans and robots has grown rapidly. Recently, brainwaves have emerged as a common and effective medium for this connection, in a technology known as a brain machine interface (BMI). Brain activities can be recorded through various invasive and noninvasive neuroimaging methods (Nicolas-Alonso and Gomez-Gil 2012).

As one of the prominent applications of BMI systems, many researchers have investigated controlling a neuroprosthetic arm using brain activities for disabled people [see reviews in (Bhuiyan et al. 2015; Wright et al. 2016; Schultz and Kuiken 2011; Miranda et al. 2015)]. Using invasive approaches, many studies achieved control of robotic arms in primate subjects (Taylor et al. 2002; Velliste et al. 2008; Carmena et al. 2003; Gilja et al. 2012) and human subjects (Hochberg et al. 2012; Vogel et al. 2015; Fifer et al. 2014; Collinger et al. 2013). However, invasive BMIs carry potential risks associated with surgery and implantation. Control of a physical device using noninvasive BMIs, in contrast, is still at an early stage. Among noninvasive neuroimaging methods, electroencephalography (EEG) is the most common and popular due to its low cost and portability (Nicolas-Alonso and Gomez-Gil 2012). The existence of EEG activity during imaginary movement of a lost limb was first confirmed in 1971 (Nirenberg et al. 1971). In 1999, Guger et al. (1999) developed the first BMI-controlled prosthesis employing EEG signals. They used the sensorimotor rhythm (SMR) of imagined right/left hand movement to control actions such as grasping in a simple robotic hand. SMR consists of the mu rhythm (8–12 Hz) and beta rhythm (18–26 Hz) recorded over the sensorimotor cortex (Yuan and He 2014; He et al. 2015).

SMR has been one of the most popular paradigms in the EEG-based BMI category, taking advantage of motor imagery to control prosthetic devices (McFarland et al. 2010; LaFleur et al. 2013). Baxter et al. (2013) had relative success in using EEG signals to control a robotic arm in two dimensions. By splitting horizontal and vertical motor imagery control tasks, they were able to translate one-dimensional control to a two-dimensional arm and lift an object at any point along a single line (Baxter et al. 2013). More recently, Meng et al. (2016) used an SMR-based EEG paradigm to control a robotic arm to efficiently achieve reach and grasp tasks. This extended the study of Baxter et al. (2013) to a 3D workspace by dividing the reach-and-grasp task into two stages of lower-dimensional control, each of which monitors the position of the target object in a horizontal or vertical plane, respectively (Meng et al. 2016). A more advanced method employs SMR accompanied by shared control, which allows an intelligent machine to share control of more complicated maneuvers with an operator to decrease the mental workload. A study by Li et al. (2014) used shared control along with five different motor imagery tasks to control a manipulator in three-dimensional space. Murguialday et al. (2007) used a three-state SMR-based EEG to control a prosthetic hand equipped with force and angle sensors that provided vibrotactile feedback to the user's forearm. Motor imagery can also be used to move an advanced robotic arm in an approach known as the synergetic motor learning algorithm (Bhattacharyya et al. 2016). This approach was employed by Bhattacharyya et al. (2016) in a one-dimensional workspace where the user only needed to focus on the end point and the machine adapted to the environment to reach that destination. SMR-based EEG has also been used to control artificial limbs (Aiqin et al. 2011; Kreilinger et al. 2012; Vidaurre et al. 2016), hand prostheses (Hazrati et al. 2014), and hand orthoses (Chen et al. 2009). The foremost downside of SMR-based BMI systems is the requirement of a lengthy training protocol that can take days or even weeks. SMR paradigms are not a natural way of controlling limbs and potential neural prostheses, since they often require subjects to learn to modulate specific frequency bands of neural activity in order to move the cursor in a determined direction and acquire satisfactory performance (Bradberry et al. 2011).

Other EEG paradigms have also been exploited to control neuroprosthetic devices. One approach used "focused" and "meditative" mind states to control vertical movements of a prosthetic hand in one dimension (Sequeira et al. 2013). Muller-Putz and Pfurtscheller (2008) employed a steady-state visual evoked potential (SSVEP)-based BMI to manipulate wrist rotation and hand grasping in a simple prosthetic hand. Error-related potentials and observation-based methods have also been explored for robotic devices (Iturrate et al. 2015; Agashe et al. 2014, 2015). Kim et al. (2015) used trajectories reconstructed from real hand movement to move a 6-DOF robotic arm in three dimensions. These paradigms map information extracted discretely from EEG in response to external stimulation (e.g., LEDs flickering at different frequencies for SSVEP) to different directions of movement in external prosthetic devices. Notably, they do not provide a natural approach to prosthetic control (Bradberry et al. 2011).

Some other BMI works combined the aforementioned EEG paradigms to control a robotic arm in reaching and manipulation tasks. Hortal et al. (2015) accomplished real-time two-dimensional control by using SMR as well as mental tasks to discern between horizontal and vertical motion. This task only focused on reaching points in a workspace, not actual manipulation of the environment. Horki et al. (2011) controlled a 2-DOF artificial arm with a hybrid paradigm of SSVEP and SMR: motor imagery was employed for the grasp function, and SSVEP was chosen to control elbow extension and flexion. Hybrid EEG paradigms have also been utilized to control hand orthoses (Pfurtscheller et al. 2010) and robotic arms for rehabilitation purposes (Bhattacharyya et al. 2014; Luth et al. 2007).

The paradigm based on imagined body kinematics (IBK) is widely used in the invasive domain (Taylor et al. 2002; Kim et al. 2008; Bacher et al. 2015). This paradigm represents the kinematics of natural imagined movements of a limb (usually the end-effector) in the time domain. More recently, it has been successfully adopted in noninvasive EEG studies to control a computer cursor with a fast training protocol (a few minutes of training) based on natural imagined movements of a subject's dominant hand (Bradberry et al. 2011; Abiri et al. 2015a). It was found that low-frequency EEG signals (less than 1 Hz) contain the kinematic information necessary to manipulate external neuroprosthetic devices (Bradberry et al. 2011; Abiri et al. 2015a). Our previous works employed the IBK paradigm to manipulate gestures of a humanoid robot (Abiri et al. 2015b, 2017) and to harness the lateral movements of a quadcopter (Abiri et al. 2016). In contrast to the SMR paradigm, which requires a long training time (weeks to months), the IBK paradigm has shown positive performance in BMI tasks after only several minutes of training. This article develops a new BMI platform based on the IBK paradigm to control a robotic arm through a sequence of movements in a manipulation task. The sequenced movement poses a greater challenge than previous tasks (Abiri et al. 2015b, 2016, 2017), since success requires the subject to conduct a number of predefined operations in a given order. Such a sequence-based task is unlikely to be completed by chance, and may therefore serve as a robust metric for evaluating BMI performance.

2 Materials and methods

This section discusses the subject training protocol and the BMI platform. The Institutional Review Board at the University of Tennessee approved the experimental procedure. Three subjects participated in a virtual cursor control task. Later, one of the subjects was recruited to interact with a robotic arm to perform a manipulation task in a preliminary study. During all experiments, EEG signals were acquired using a 14-channel wireless headset (Emotiv EPOC) through the BCI2000 software (Schalk et al. 2004). The Emotiv EPOC had a sampling rate of 128 Hz, and the channel layout is presented in Fig. 1. Unlike ECoG, single units, and LFP, which measure localized signals from a specific cortex, EEG signals contain information from multiple cortices (Slutzky and Flint 2017). Bradberry et al. (2011) demonstrated that IBK-encoded cursor movement has contributions from multiple neural regions, a significant part of which is covered by the EPOC electrodes. Moreover, the decoding approach in this work is a multiple regression model using all electrodes. As further support, previous studies show that the EPOC system can be used to control BMI platforms using sensorimotor algorithms, which are believed to originate mainly in the motor cortex and somatosensory cortex (Bhattacharyya et al. 2016).
Fig. 1

Channel layout for the Emotiv EPOC 14-channel headset. The electrodes are arranged according to the international 10–20 system. The reference electrodes are in the CMS/DRL noise-canceling configuration at the P3/P4 locations

2.1 Cursor control experiments

Three healthy subjects (right-handed; average age 23) participated in the training protocol after giving informed consent. The purpose of the training tasks was to familiarize the naive subjects before direct interaction with the developed BMI robotic platform.

2.1.1 Training tasks

A PC with dual monitors was used to facilitate the computerized training tasks. The refresh rate for the monitors was 60 Hz. One monitor was designated for the experimenter and the other was viewed by the subject. The protocol consisted of three phases of a virtual cursor control task. Phase one was the training phase. In this phase, the subject was asked to sit comfortably in a fixed chair at an arm’s length away from the monitor with hands resting on the lap. The computer cursor was pre-programmed to move in one dimension (right–left/up–down) on the monitor. The subject was instructed to maintain focus and simultaneously imagine following the cursor with his/her dominant hand. Subjects were allowed to keep natural eye movements while preventing any other overt muscle movements. The training phase consisted of five trials for left/right and five trials for up/down cursor movements. Each trial lasted 60 s.

Phase two was the calibration phase. A multiple regression model was trained to decode the velocity of the cursor as a function of the EEG signals of the subject. Coefficients of the regression model were then manually adjusted to ensure the magnitudes of the predicted velocities were sufficient to cover the workspace of the screen in the time allotted for each trial. The trial time was set to 15 s to prevent subject fatigue. By incorporating a gain value in the range of 1–2, the cursor velocity was adjusted to reach the edges of the screen in an appropriate time without being overly sensitive. The resulting neural decoder was then loaded into the BCI2000 software during phase three to test the performance of the subject. The test phase consisted of 40 trials of cursor control in a four-target acquisition task. The workspace for the cursor was 33 cm × 33 cm. The cursor diameter was 1.5 cm (0.20% of the workspace), and each target occupied 2.4% of the workspace (8% of screen width wide and 30% of screen width long). In each trial, one target was randomly highlighted among the four targets located at the left, right, top, and bottom edges of the screen. The subjects were asked to hit the highlighted target using imagined hand movement while avoiding the other three targets.
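The gain adjustment described above can be sketched as follows. This is an illustrative reconstruction only: the paper reports that the coefficients were tuned manually, so the sizing rule used here (the mean predicted cursor speed must cover half the 33 cm workspace within one 15 s trial) and the function name are assumptions.

```python
def calibrate_gain(pred_speeds_cm_s, half_width_cm=16.5, trial_s=15.0,
                   lo=1.0, hi=2.0):
    """Pick a gain in [1, 2] so the cursor can reach a screen edge in one trial.

    pred_speeds_cm_s: predicted cursor speeds (cm/s) from calibration data.
    The sizing heuristic is an assumption for illustration; the authors
    adjusted the regression coefficients by hand.
    """
    mean_speed = sum(abs(s) for s in pred_speeds_cm_s) / len(pred_speeds_cm_s)
    needed = half_width_cm / trial_s          # average speed required (cm/s)
    gain = needed / mean_speed if mean_speed > 0 else hi
    return min(hi, max(lo, gain))             # clamp to the 1-2 range
```

For example, a decoder whose raw predictions average 0.7 cm/s would receive a gain of about 1.57, while an already fast decoder is clamped to the lower bound of 1.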

2.1.2 Decoding brain signals

Many decoding methods for EEG data have been investigated in both the frequency and time domains, with the frequency domain being the most prevalent for SMR-based studies (McFarland et al. 2010; LaFleur et al. 2013; Baxter et al. 2013; Li et al. 2014; Wolpaw et al. 1991; Wolpaw and McFarland 2004; Royer et al. 2010; Doud et al. 2011; Hazrati and Hofmann 2013; Xia et al. 2015). In the time domain, researchers have applied linear regression models to temporal EEG features as a common method of decoding brainwave data, for both offline kinematics decoding (Bradberry et al. 2009, 2010; Antelis et al. 2013; Ofner and Muller-Putz 2012; Ubeda et al. 2013) and real-time implementation (Bradberry et al. 2011). Many previous works suggested that, among the kinematic parameters of the body, velocity encoding/decoding showed the most predictive potential (Bradberry et al. 2009, 2010; Ofner and Muller-Putz 2012). Therefore, we decoded and mapped the acquired EEG data to the observed cursor velocities in the horizontal and vertical directions. In other words, the aim of this procedure was to reconstruct the subject's imagined trajectories from EEG data and obtain a calibrated decoder by transferring all collected data to MATLAB for analysis and development. The EEG signals collected using the Emotiv EPOC were filtered within the frequency band of 0.16–43 Hz. BCI2000 then applied an intrinsic low-pass filter with a cut-off frequency of 30 Hz (Schalk et al. 2004). Finally, a zero-phase, fourth-order, low-pass Butterworth filter with a cutoff frequency of 1 Hz was applied to further filter the EEG data. As a result, the signals used for IBK decoding were in the frequency range between 0.16 and 1 Hz. Previous studies confirmed that low-frequency signals from the sensorimotor cortex contain information on imagined body kinematics (Bradberry et al. 2011).
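The final stage of the filtering chain (zero-phase, fourth-order Butterworth, 1 Hz cutoff at the 128 Hz sampling rate) can be sketched as follows. This is an illustrative reconstruction; the authors implemented the pipeline inside BCI2000/MATLAB, so the function name is an assumption.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128.0  # Emotiv EPOC sampling rate (Hz)

def lowpass_1hz(eeg, fs=FS, order=4, cutoff=1.0):
    """Fourth-order Butterworth low-pass at 1 Hz, applied along the time
    axis of a (samples x channels) array. filtfilt runs the filter
    forward and backward, giving zero phase lag."""
    b, a = butter(order, cutoff / (fs / 2.0), btype="low")
    return filtfilt(b, a, eeg, axis=0)

# A 0.5 Hz component should survive; a 10 Hz component should be removed.
t = np.arange(0, 10, 1 / FS)
slow = np.sin(2 * np.pi * 0.5 * t)
fast = np.sin(2 * np.pi * 10.0 * t)
filtered = lowpass_1hz((slow + fast)[:, None])[:, 0]
```

Zero-phase filtering matters here because the decoder aligns EEG samples with cursor velocity in time; a causal filter would introduce a lag between the two.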
A multiple regression algorithm was developed to estimate the output velocities at time t in horizontal direction (u[t]) and vertical direction (v[t]) as follows:
$$ u[t] = a_{0x} + \sum_{n=1}^{N} \sum_{k=0}^{K} b_{nkx}\, e_{n}[t-k] \qquad (1) $$
$$ v[t] = a_{0y} + \sum_{n=1}^{N} \sum_{k=0}^{K} b_{nky}\, e_{n}[t-k] \qquad (2) $$
where \( e_{n}[t-k] \) is the measured voltage of EEG channel n at time t − k. The total number of EEG channels is N = 14, and the total number of lags, based on our previous studies (Abiri et al. 2015a, 2016, 2017), is chosen to be K = 13. The choice of 13 lag points includes roughly 100 ms of previous EEG data, which is necessary for prediction (13 points at 128 Hz corresponds to 102 ms) (Bradberry et al. 2011). The parameters a and b are estimated to minimize the error between the predicted and observed velocities.
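A minimal Python sketch of this decoder is shown below for one velocity component. The authors performed the fitting in MATLAB, so the function names and the plain least-squares solver are assumptions; the lag structure follows Eqs. 1 and 2.

```python
import numpy as np

N_CHANNELS = 14   # Emotiv EPOC channels (N)
N_LAGS = 13       # K = 13 lag points (~100 ms at 128 Hz)

def lagged_features(eeg, k=N_LAGS):
    """Stack e_n[t], e_n[t-1], ..., e_n[t-k] for every channel into one
    row per time step. eeg is (samples x channels); the first k samples
    are dropped because they lack a full history."""
    T, _ = eeg.shape
    rows = [eeg[t - k:t + 1][::-1].ravel() for t in range(k, T)]
    return np.asarray(rows)                        # (T - k, channels*(k+1))

def fit_decoder(eeg, velocity, k=N_LAGS):
    """Least-squares estimate of the intercept a0 and weights b_nk for
    one velocity component (u or v)."""
    X = lagged_features(eeg, k)
    X = np.hstack([np.ones((X.shape[0], 1)), X])   # intercept column for a0
    coef, *_ = np.linalg.lstsq(X, velocity[k:], rcond=None)
    return coef                                    # [a0, b_00, b_01, ...]

def predict(coef, eeg, k=N_LAGS):
    """Apply Eq. 1/2: a0 plus the weighted sum over channels and lags."""
    return coef[0] + lagged_features(eeg, k) @ coef[1:]
```

With real data, `eeg` would be the 0.16–1 Hz filtered signals and `velocity` the observed cursor velocity from the training trials; one decoder is fit per direction (u and v).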

The collected EEG data and cursor kinematics information from training sessions were used in Eqs. 1 and 2 for offline analysis. For each subject, the linear regression model was trained and tested using leave-one-trial-out cross validation on the data from all five trials in each direction. Then, all the five extracted sets of the linear regression coefficients (a and b) were averaged and reported as the trained neural decoder. These averaged coefficients were employed in the BCI2000 filter module for online processing and real-time control of cursor movement during the aforementioned four-target test phase. A custom Goodness-of-Fit (GoF) metric was introduced to evaluate the predictive capabilities of the model. This performance metric is computed by segmenting each testing trial into chunks of 5 s and calculating the Pearson correlation between the predicted and actual cursor velocities. Then, the averaged value of the calculated Pearson correlation over chunks of a trial is defined as GoF for the trial:

$$ \text{GoF} = \frac{1}{M}\sum_{i=1}^{M} \mathrm{Corr}\left( \mathbf{V}_{\text{decoded}}^{i},\, \mathbf{V}_{\text{observed}}^{i} \right) $$
where \( \mathbf{V}_{\text{decoded}}^{i} \) and \( \mathbf{V}_{\text{observed}}^{i} \) represent the decoded and observed velocities for the ith segment, respectively. Since each trial lasts 60 s, the number of segments per trial is M = 12.
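The GoF computation is straightforward to express in code. The sketch below follows the definition above (mean Pearson correlation over consecutive 5 s chunks); the function name is an assumption.

```python
import numpy as np

FS = 128       # samples per second
CHUNK_S = 5    # segment length in seconds

def goodness_of_fit(v_decoded, v_observed, fs=FS, chunk_s=CHUNK_S):
    """Mean Pearson correlation between decoded and observed velocity
    over consecutive 5 s segments of a trial (M = 12 for a 60 s trial)."""
    step = fs * chunk_s
    m = len(v_observed) // step
    corrs = [
        np.corrcoef(v_decoded[i * step:(i + 1) * step],
                    v_observed[i * step:(i + 1) * step])[0, 1]
        for i in range(m)
    ]
    return float(np.mean(corrs))
```

Segmenting before correlating penalizes a decoder that only tracks the slow trend of a trial: each 5 s window must be predicted well on its own for the average to stay high.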

2.2 Developed BMI robotic platform

This subsection discusses the hardware and software employed in our developed BMI platform. Figure 2 shows the complete robotic BMI platform and the overall schematic of data flow from the brain to the robotic arm. Through programming in the BCI2000 software and the interface software (see the following subsection), imagined body kinematics tasks were mapped to particular movements of the robotic arm. The subject was able to monitor their performance in real time. The proposed platform operated as a closed-loop control system for manipulating objects in the environment. A brief description of each part of the platform is presented below.
Fig. 2

The complete hardware and software components for the developed sequence-based brain machine interface platform to discretely control a robotic arm in manipulation tasks. The subject receives visual feedback from the sequence trials of cursor control task to hit a target. The robotic arm performs the corresponding movements for the acquired target during the inter-trial times while the subject is watching its movement

2.2.1 6-DOF robotic arm

As shown in Fig. 2, a low-cost 6-DOF robotic arm (DFRobot) was used for real-time interaction with the environment. The arm featured six servo motors allowing full control over a 180° workspace. After individual testing, the six motors were connected to a Raspberry Pi board with an attached servo driver. The servo driver used pulse width modulation (PWM) to control the motors on the robotic arm, and communicated with the Raspberry Pi over the I2C protocol. Individual servo motions were controlled via Python code running on the Raspberry Pi.

Custom Python code was used to determine the optimal PWM values for controlling the individual servo motors efficiently. Using inverse kinematics, the joint angles were determined for an initial open-loop test of the robotic arm. In this test, the sequence of joint-angle changes necessary to grasp an object on the left side of the operational area and drop it on the right side was determined. The test was then divided into six segments: left rotation, bending down and grasping, lifting the object, right rotation, bending down and releasing, and returning to the upright position. These segments were chosen to correspond to the onscreen target locations during the manipulation task.
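The joint-angle-to-PWM conversion can be illustrated as below. The 500–2500 µs pulse range and the 12-bit, 50 Hz driver parameters are typical hobby-servo and PCA9685-style-driver conventions assumed for illustration; the paper does not report the actual calibration values.

```python
def angle_to_pulse_us(angle_deg, min_us=500.0, max_us=2500.0):
    """Map a joint angle in [0, 180] degrees to a servo pulse width in
    microseconds. The 500-2500 us range is a common hobby-servo
    convention, not a value reported in the paper."""
    angle_deg = max(0.0, min(180.0, angle_deg))    # clamp to servo range
    return min_us + (max_us - min_us) * angle_deg / 180.0

def pulse_to_counts(pulse_us, freq_hz=50.0, resolution=4096):
    """Convert a pulse width to the 12-bit on-time count expected by a
    PCA9685-style PWM driver running at 50 Hz (20 ms frame)."""
    period_us = 1_000_000.0 / freq_hz
    return round(pulse_us / period_us * resolution)
```

For example, the upright 90° position maps to a 1500 µs pulse, the midpoint of the servo's travel.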

2.2.2 Software interface

As mentioned in the previous subsection, a Python program serves as the interfacing software that receives command signals from the brain interface side (BCI2000) and translates them into external commands for control of the robotic arm. The communication between these two interfaces happens over the User Datagram Protocol (UDP).

All EEG signals were transmitted by Bluetooth from the Emotiv EPOC headset to BCI2000, where they were decoded into intended velocities during the sequences of separate trials. The generated velocities were then used as the control signal for the cursor. Results from the target acquisition trials were sent to the Raspberry Pi through the UDP protocol, where they were translated into the necessary commands for the robotic arm.

2.2.3 Robotic arm control using BMI platform

To prepare the complete BMI platform for testing, the cursor control task was modified. Four targets were shown simultaneously to the subject in the same manner as the test phase described in Sect. 2.1.1. During control of the robotic arm, no target was highlighted or instructed; the subject was free to select targets associated with specific actions of the arm. In other words, the subject was given a maximum trial time of 15 s to select a target corresponding to the desired action, and the joint movements associated with that action were then executed during the inter-trial period, which was set to 7 s. To complete the manipulation task in minimum time, the subject had to follow a sequence of target acquisitions to discretely control the robotic arm.

This results in a basic shared control model in which the subject issues a high-level command for an intended action and the program carries out the more complex maneuvers. In the following, an example of programming for the robotic arm and the performance of a manipulation task is provided.

Figure 3 illustrates a schematic of the protocol employed for controlling the robotic arm in this study. When the left target is hit (left imagined movement), the base of the robotic arm rotates from the upright starting position to the left position of the workspace, corresponding to the target location (from 90° to 147°). The bottom target (down imagined movement) is programmed such that the first time it is hit, the shoulder, elbow, and wrist turn down over the target location and the gripper closes, while the second time it is hit, the same joints turn down over the target location and the gripper opens. The variation of the elbow, wrist, shoulder, and gripper is as follows:
Fig. 3

A simple schematic of the robotic arm with the employed workspace. As an example of the programmed manipulation task, the type of imaginary movement and its corresponding effect on the robotic arm is shown. The initial position of the object and the desired final location are illustrated. The middle vertical line shows the starting/ending angle of robotic arm in a run of manipulation task

  • Elbow: from 90° to 30°

  • Wrist: from 90° to 65°

  • Shoulder: from 90° to 48°

  • Gripper: from 180° to 0° if the gripper is initially in the open position, and from 0° to 180° if the gripper is initially in the closed position

The right target (right imagined movement) rotates the base of the robotic arm from the left position of the workspace to the desired target location on the right side (from 147° to 67°). The top target (up imagined movement) returns the elbow, wrist, and shoulder to the upright position and also rotates the base back to the starting position if it is in the right-rotated position. The variation of the elbow, wrist, shoulder, and base is as follows:
  • Elbow: from 30° to 90°

  • Wrist: from 65° to 90°

  • Shoulder: from 48° to 90°

  • Base: will remain in its current position if it is initially to the left of the starting position, or it will rotate from 67° to 90° if it is initially to the right of the starting angle

To perform the manipulation task, one subject from the training protocol was recruited for the robotic arm control task. The subject was familiarized with how the robotic arm was programmed and could be controlled. The EEG data collected during the cursor trials were used in real time to control the movements of the robot arm. The optimized control sequence for an example manipulation task consisted of a six-step trial sequence for the arm to pick up an object on the left side of the operational area, drop it on the right side, and return to its original position. This corresponds to acquiring the target sequence left, bottom, top, right, bottom, top, which completes the task in the fewest steps.
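The target-to-action mapping above can be expressed as a small state machine, as sketched below. This is an illustrative reconstruction using the base angles and gripper behavior described in the text; the class, function names, and success bookkeeping are assumptions, not the authors' code.

```python
class ArmSequencer:
    """Discrete state machine for the four targets: L/R move the base,
    B bends down and toggles the gripper, T returns the arm upright.
    Base angles (90, 147, 67 degrees) are taken from the text."""

    def __init__(self):
        self.base = 90               # upright starting position
        self.arm_down = False
        self.gripper_closed = False

    def step(self, target):
        if target == "L":            # rotate base over the object
            self.base = 147
        elif target == "R":          # rotate base to the drop site
            self.base = 67
        elif target == "B":          # bend down and toggle the gripper
            self.arm_down = True
            self.gripper_closed = not self.gripper_closed
        elif target == "T":          # return upright; re-center only
            self.arm_down = False    # from the right-rotated position
            if self.base == 67:
                self.base = 90

def run_sequence(targets):
    """Simulate a run; report whether the object was picked up over the
    left site, dropped at the right site, and the final base angle."""
    arm = ArmSequencer()
    picked = dropped = False
    for t in targets:
        arm.step(t)
        if t == "B" and arm.gripper_closed and arm.base == 147:
            picked = True
        if t == "B" and not arm.gripper_closed and arm.base == 67 and picked:
            dropped = True
    return picked, dropped, arm.base
```

Running the optimal sequence L, B, T, R, B, T through this model picks up the object, drops it on the right, and leaves the base back at 90°, matching the described task.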

3 Results

Figure 4 shows an example of the decoded cursor velocity for one of the recruited subjects during a cross-validated horizontal and vertical training trial. The decoding method provides an acceptable estimate of the velocities, with GoF values of 91.85% and 67.47% for the horizontal and vertical velocity, respectively (see Sect. 2.1.2 for the calculation of GoF). Additionally, Table 1 reports the test-phase results for 40 trials of target acquisition in the cursor control task. Four targets were presented simultaneously around the screen, and subjects were instructed to hit the randomly highlighted target. Each subject performed all 40 trials. The overall performance of the subjects in hitting the highlighted target was 77.5%. This result is comparable with the noninvasive work by Meng et al. (2016) and the invasive study published by Schalk et al. (2008).
Fig. 4

Observed cursor velocity (dashed) and decoded cursor velocity using the regression model for a sample subject. Data is based on the training phase experiment for the subjects. Vx represents the horizontal velocity and Vy represents the vertical velocity

Table 1

Average hit rate (STD), in %, for individually highlighted targets for each subject during the test phase of cursor control with simultaneous presence of four targets on the monitor

Subject 1: 95.0 (5.8)
Subject 2: 62.5 (12.5)
Subject 3: 75.0 (12.9)
Per-target means across subjects: 83.3 (20.8), 83.3 (15.2), 73.3 (20.8), 70.0 (17.3)
Overall mean: 77.5 (14.9)

The subject recruited for the robotic arm control task participated in 10 runs of the manipulation task. Figure 5 shows two snapshots of the subject controlling the robotic arm. Figure 5a shows the arm upright at the starting position while the subject performs a cursor control trial to hit the correct target (the left target) and bring the arm to the position of the object. Figure 5b shows the robotic arm moving to place the object on the right while the subject receives visual feedback by watching the arm's movements during the inter-trial phase (see Online Resource 1).
Fig. 5

a The robotic arm is in a fixed position (upright) while the subject is controlling the cursor on the monitor in a trial. b The robotic arm is performing a specific task (dropping the object on the right side of the workspace) during an inter-trial time while the subject is receiving feedback from the robotic arm (see Online Resource 1 for a video of the actual experiment)

Figure 6 shows the changes in the joint angles of the robotic arm during all trials and inter-trials of one run of the described manipulation task. These joint angles are from a run in which the subject followed the optimized control steps; the task took about 1 min to complete. Table 2 lists the results of the 10 runs of the manipulation task and the trial sequences performed by the subject. By following the correct sequence (runs 1 and 9), the subject was able to control the robotic arm in minimum time with a minimum number of steps. In some other successful runs, the subject made a mistake in the sequence but was able to correct it and complete the goal. In the failed runs, the subject could not correct his mistake and the manipulation task was aborted. Overall, the subject's success rate was 70% over the 10 runs.
Fig. 6

Servo angles vs. time for run 1, a successful task with no mistakes (see Table 2), in controlling the robotic arm in the manipulation task. End of trial marks the time when a target is hit and the command signal is read by the Python interface. The robotic arm movement is carried out during the inter-trial phase, which lasts 7 s. The cursor control task is carried out during the trial period, which can last up to 15 s. After completion of the manipulation task, the final position of the arm is the same as the initial position

Table 2

The performance of the subject for 10 runs of controlling the robotic arm to pick up an object on the left side of the workspace, drop it on the right side, and return to the original position

In order to achieve this, the subject is instructed to follow the optimized target sequence of left (L), bottom (B), top (T), right (R), bottom, top during cursor control trials. Failure is classified as the block being dropped in an unreachable position, or the subject giving up

4 Discussion

Recently, BMI systems have attracted interest among researchers as a means to help those with movement impairments independently perform the activities of daily life. Invasive approaches have reported major success in controlling external devices with the brain. However, owing to the required surgery and signal degradation over time, these approaches are not a suitable solution for all patients. In recent years, noninvasive methods have shown promising results in brain machine interfaces. Among noninvasive approaches, EEG has become popular due to its low cost, convenient recording, and portability.

In BMI systems, the mutual interaction and cooperation between the subject and the external device plays a crucial role in completing a particular task (Millán 2015). Nonstationary brain activities require an intelligent external device to perform different tasks. BMI systems based on a shared control approach are designed to deal with this problem (Fifer et al. 2014; Meng et al. 2016; Li et al. 2014). By employing this approach, the subject does not need to be continuously and directly involved with every step of performing a task since the intelligent external device is responsible for performing the more complex maneuvers. Although the direct involvement of a subject with a BMI system engenders a more natural feeling of control, the continuous involvement during performance of the task causes fatigue for the subject. The ideal BMI system with mutual interaction can be described as a supervisory control system; the subject with minimum involvement intends to perform a specific task while supervising (watching) an intelligent external device execute and complete the task in a natural way.

In this paper, a sequence-based BMI system was presented that discretizes the manipulation task into multiple sequential steps. The subject voluntarily controls a cursor on a monitor during a trial to hit a target, which generates a command signal that drives the robotic arm during the inter-trial time. This method acts as a basic shared control model in which the user selects the intended action and the intelligent program carries out the complex maneuvers of the robotic arm. For example, when the user wishes to place the object down on the workspace, he issues the high-level command by selecting the bottom target; the program then carries out the more complex motions of the shoulder, elbow, and wrist joints and determines whether the gripper should open or close. This sequence-based design reduces the subject's engagement by avoiding continuous, moment-to-moment control of the external device, thereby preventing fatigue. It was also shown that in some runs of the manipulation task, the BMI system was robust to the subject's mistakes and gave the subject the opportunity to correct them.

With this approach, more complex manipulation tasks can be programmed into the BMI robotic platform. For example, it could be programmed to pick up an object from any arbitrary location in a 2D workspace and drop it at any other location in the same workspace, by further discretizing the manipulation task and developing more intelligent programming. The same approach can be applied to other robotic devices that are inherently unstable under brain control, such as drones. To further improve the robustness of the developed model, more trials should be collected from each subject along with a larger subject population. Cross-validation is a common approach to avoid information leakage (overfitting) in machine learning; given the small number of trials in this study, leave-one-out cross-validation provides a robust estimate of the regression results.
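The leave-one-out procedure mentioned above can be sketched as follows. The data and linear decoder here are synthetic stand-ins, not the study's EEG recordings or its fitted model; the sketch only illustrates how each trial is held out in turn when trials are few.

```python
# Illustrative leave-one-out cross-validation for a linear decoder,
# the kind of estimate used when only a small number of trials exists.
# Features, weights, and noise level are synthetic assumptions.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((12, 4))                  # 12 trials x 4 EEG features
w_true = np.array([0.5, -1.0, 0.3, 0.8])
y = X @ w_true + 0.1 * rng.standard_normal(12)    # decoded kinematic parameter

errors = []
for i in range(len(y)):
    train = np.arange(len(y)) != i                # hold out trial i
    w, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)
    errors.append(float((X[i] @ w - y[i]) ** 2))  # test on the held-out trial

print(f"LOOCV mean squared error: {np.mean(errors):.4f}")
```

Because every trial serves once as the test set, the estimate uses all available data without letting any test trial leak into training.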

5 Conclusion and future works

A noninvasive, sequence-based brain machine interface was developed to control a robotic arm in a manipulation task. Although BMI systems using invasive technologies have achieved remarkable results, noninvasive work in this field is still in its infancy. Here, multiple subjects were trained to control a computer cursor on a monitor to hit one highlighted target among four presented targets using the imagined body kinematics paradigm. Overall, the subjects demonstrated good performance in controlling the virtual cursor after only 10 min of training. The developed cursor control task was then modified and employed in a sequence-based arbitrary target acquisition mode to discretely control a robotic arm in a manipulation task. One subject took part in controlling the robotic arm with the brain. The subject was responsible for voluntarily controlling the cursor to hit arbitrary targets during each trial while the robotic arm performed a particular movement during the inter-trial intervals. The subject was aware of the optimized sequence of correct targets to hit to steer the robotic arm through the manipulation task. Ten runs of the manipulation task were performed, each taking about one to two minutes to complete in the 2D workspace. Overall, the subject achieved a success rate of 70%.

It has been shown in the literature that some subjects are unable to effectively control the cursor even after extensive training, a phenomenon known as BCI illiteracy. The occurrence of BCI illiteracy is estimated at 15–30% (Vidaurre and Blankertz 2010). In our pilot study, one of the three recruited subjects demonstrated very low BCI controllability, an indication of BCI illiteracy. To improve controllability for people with BCI illiteracy, more advanced machine learning techniques need to be developed. Future work should also recruit more subjects to further improve the robustness and accuracy of the platform.

Although a fair amount of research has been conducted on BMI in virtual applications such as cursor control, there have been limited implementations in the noninvasive domain, particularly for controlling physical devices performing multi-dimensional tasks. This work serves as a pilot study demonstrating control of a robotic arm in a two-dimensional workspace. In future work, we plan to adapt our protocol to better represent an activity of daily life (e.g., picking up a labeled cup among multiple cups with the robotic arm). We would also like to observe how subjects interact with the platform over time, toward enabling control of an assistive device by persons with motor impairments.

Acknowledgements

This work was in part supported by a NeuroNET seed grant to XZ; and in part by the NIH under grants NIH P30 AG028383 to the UK Sanders-Brown Center on Aging, NIH AG00986 to YJ, and NIH NCRR UL1TR000117 to the UK Center for Clinical and Translational Science. JK’s work was partially supported through a summer internship from the Office of Undergraduate Research at The University of Tennessee.

Supplementary material

Supplementary material 1 (MP4 24822 kb)


References

  1. Abiri, R., et al.: EEG-based control of a unidimensional computer cursor using imagined body kinematics. In: Biomedical Engineering Society Annual Meeting (BMES 2015) (2015a)
  2. Abiri, R., et al.: A real-time brainwave based neuro-feedback system for cognitive enhancement. In: ASME 2015 Dynamic Systems and Control Conference, Columbus, OH (2015b)
  3. Abiri, R., et al.: Planar control of a quadcopter using a zero-training brain machine interface platform. In: Biomedical Engineering Society Annual Meeting (BMES 2016) (2016)
  4. Abiri, R., et al.: Brain computer interface for gesture control of a social robot: an offline study. In: 2017 Iranian Conference on Electrical Engineering (ICEE). IEEE, New York (2017)
  5. Agashe, H., Contreras-Vidal, J.L.: Observation-based training for neuroprosthetic control of grasping by amputees. In: Engineering in Medicine and Biology Society (EMBC), 2014 36th Annual International Conference of the IEEE. IEEE, New York (2014)
  6. Agashe, H.A., et al.: Global cortical activity predicts shape of hand during grasping. Front. Neurosci. 9, 121 (2015)
  7. Aiqin, S., Binghui, F., Chaochuan, J.: Motor imagery EEG-based online control system for upper artificial limb. In: International Conference on Transportation, Mechanical, and Electrical Engineering (TMEE) (2011)
  8. Antelis, J.M., et al.: On the usage of linear regression models to reconstruct limb kinematics from low frequency EEG signals. PLoS ONE 8(4), e61976 (2013)
  9. Bacher, D., et al.: Neural point-and-click communication by a person with incomplete locked-in syndrome. Neurorehabil. Neural Repair 29(5), 462–471 (2015)
  10. Baxter, B.S., Decker, A., He, B.: Noninvasive control of a robotic arm in multiple dimensions using scalp electroencephalogram. In: 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER). IEEE, New York (2013)
  11. Bhattacharyya, S., Konar, A., Tibarewala, D.: Motor imagery, P300 and error-related EEG-based robot arm movement control for rehabilitation purpose. Med. Biol. Eng. Comput. 52(12), 1007–1017 (2014)
  12. Bhattacharyya, S., Shimoda, S., Hayashibe, M.: A synergetic brain-machine interfacing paradigm for multi-DOF robot control. IEEE Trans. Syst. Man Cybern. Syst. 46(7), 957–968 (2016)
  13. Bhuiyan, M., Choudhury, I., Dahari, M.: Development of a control system for artificially rehabilitated limbs: a review. Biol. Cybern. 109(2), 141–162 (2015)
  14. Bradberry, T.J., Gentili, R.J., Contreras-Vidal, J.L.: Decoding three-dimensional hand kinematics from electroencephalographic signals. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2009, 5010–5013 (2009)
  15. Bradberry, T.J., Gentili, R.J., Contreras-Vidal, J.L.: Reconstructing three-dimensional hand movements from noninvasive electroencephalographic signals. J. Neurosci. 30(9), 3432–3437 (2010)
  16. Bradberry, T.J., Gentili, R.J., Contreras-Vidal, J.L.: Fast attainment of computer cursor control with noninvasively acquired brain signals. J. Neural Eng. 8(3), 036010 (2011)
  17. Carmena, J.M., et al.: Learning to control a brain–machine interface for reaching and grasping by primates. PLoS Biol. 1(2), e42 (2003)
  18. Chen, C.W., Lin, C.C.K., Ju, M.S.: Hand orthosis controlled using brain–computer interface. J. Med. Biol. Eng. 29(5), 234–241 (2009)
  19. Collinger, J.L., et al.: High-performance neuroprosthetic control by an individual with tetraplegia. Lancet 381(9866), 557–564 (2013)
  20. Doud, A.J., et al.: Continuous three-dimensional control of a virtual helicopter using a motor imagery based brain-computer interface. PLoS ONE 6(10), e26322 (2011)
  21. Fifer, M.S., et al.: Simultaneous neural control of simple reaching and grasping with the modular prosthetic limb using intracranial EEG. IEEE Trans. Neural Syst. Rehabil. Eng. 22(3), 695–705 (2014)
  22. Gilja, V., et al.: A high-performance neural prosthesis enabled by control algorithm design. Nat. Neurosci. 15(12), 1752–1757 (2012)
  23. Guger, C., et al.: Prosthetic control by an EEG-based brain-computer interface (BCI). In: Proceedings of AAATE 5th European Conference for the Advancement of Assistive Technology (1999)
  24. Hazrati, M.K., Hofmann, U.G.: Avatar navigation in Second Life using brain signals. In: IEEE 8th International Symposium on Intelligent Signal Processing (WISP). IEEE, New York (2013)
  25. Hazrati, M.K., et al.: Controlling a simple hand prosthesis using brain signals. Biomed. Eng./Biomed. Tech. 59, 1152–1155 (2014)
  26. He, B., et al.: Noninvasive brain-computer interfaces based on sensorimotor rhythms. Proc. IEEE 103(6), 907–925 (2015)
  27. Hochberg, L.R., et al.: Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 485(7398), 372–375 (2012)
  28. Horki, P., et al.: Combined motor imagery and SSVEP based BCI control of a 2 DoF artificial upper limb. Med. Biol. Eng. Comput. 49(5), 567–577 (2011)
  29. Hortal, E., et al.: SVM-based brain–machine interface for controlling a robot arm through four mental tasks. Neurocomputing 151, 116–121 (2015)
  30. Iturrate, I., et al.: Teaching brain-machine interfaces as an alternative paradigm to neuroprosthetics control. Sci. Rep. 5, 13893 (2015)
  31. Kim, S.P., et al.: Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia. J. Neural Eng. 5(4), 455 (2008)
  32. Kim, Y.J., et al.: A study on a robot arm driven by three-dimensional trajectories predicted from non-invasive neural signals. Biomed. Eng. Online 14(1), 1 (2015)
  33. Kreilinger, A., Neuper, C., Müller-Putz, G.R.: Error potential detection during continuous movement of an artificial arm controlled by brain–computer interface. Med. Biol. Eng. Comput. 50(3), 223–230 (2012)
  34. LaFleur, K., et al.: Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain–computer interface. J. Neural Eng. 10(4), 046003 (2013)
  35. Li, T., et al.: Brain–machine interface control of a manipulator using small-world neural network and shared control strategy. J. Neurosci. Methods 224, 26–38 (2014)
  36. Luth, T., et al.: Low level control in a semi-autonomous rehabilitation robotic system via a brain-computer interface. In: IEEE 10th International Conference on Rehabilitation Robotics (ICORR 2007). IEEE, New York (2007)
  37. McFarland, D.J., Sarnacki, W.A., Wolpaw, J.R.: Electroencephalographic (EEG) control of three-dimensional movement. J. Neural Eng. 7(3), 036007 (2010)
  38. Meng, J., et al.: Noninvasive electroencephalogram based control of a robotic arm for reach and grasp tasks. Sci. Rep. 6, 38565 (2016)
  39. Millán, J.D.R.: Brain-machine interfaces: the perception-action closed loop: a two-learner system. IEEE Syst. Man Cybern. Mag. 1(1), 6–8 (2015)
  40. Miranda, R.A., et al.: DARPA-funded efforts in the development of novel brain–computer interface technologies. J. Neurosci. Methods 244, 52–67 (2015)
  41. Muller-Putz, G.R., Pfurtscheller, G.: Control of an electrical prosthesis with an SSVEP-based BCI. IEEE Trans. Biomed. Eng. 55(1), 361–364 (2008)
  42. Murguialday, A.R., et al.: Brain–computer interface for a prosthetic hand using local machine control and haptic feedback. In: IEEE 10th International Conference on Rehabilitation Robotics (ICORR 2007) (2007)
  43. Nicolas-Alonso, L.F., Gomez-Gil, J.: Brain computer interfaces, a review. Sensors 12(2), 1211–1279 (2012)
  44. Nirenberg, L.M., Hanley, J., Stear, E.B.: A new approach to prosthetic control: EEG motor signal tracking with an adaptively designed phase-locked loop. IEEE Trans. Biomed. Eng. 18(6), 389–398 (1971)
  45. Ofner, P., Muller-Putz, G.R.: Decoding of velocities and positions of 3D arm movement from EEG. Conf. Proc. IEEE Eng. Med. Biol. Soc. 2012, 6406–6409 (2012)
  46. Pfurtscheller, G., et al.: Self-paced operation of an SSVEP-based orthosis with and without an imagery-based “brain switch:” a feasibility study towards a hybrid BCI. IEEE Trans. Neural Syst. Rehabil. Eng. 18(4), 409–414 (2010)
  47. Royer, A.S., et al.: EEG control of a virtual helicopter in 3-dimensional space using intelligent control strategies. IEEE Trans. Neural Syst. Rehabil. Eng. 18(6), 581–589 (2010)
  48. Schalk, G., et al.: BCI2000: a general-purpose brain-computer interface (BCI) system. IEEE Trans. Biomed. Eng. 51(6), 1034–1043 (2004)
  49. Schalk, G., et al.: Two-dimensional movement control using electrocorticographic signals in humans. J. Neural Eng. 5(1), 75 (2008)
  50. Schultz, A.E., Kuiken, T.A.: Neural interfaces for control of upper limb prostheses: the state of the art and future possibilities. PM&R 3(1), 55–67 (2011)
  51. Sequeira, S., Diogo, C., Ferreira, F.J.T.E.: EEG-signals based control strategy for prosthetic drive systems. In: 2013 IEEE 3rd Portuguese Meeting in Bioengineering (ENBENG) (2013)
  52. Slutzky, M.W., Flint, R.D.: Physiological properties of brain-machine interface input signals. J. Neurophysiol. 118(2), 1329–1343 (2017)
  53. Taylor, D.M., Tillery, S.I.H., Schwartz, A.B.: Direct cortical control of 3D neuroprosthetic devices. Science 296(5574), 1829–1832 (2002)
  54. Ubeda, A., et al.: Linear decoding of 2D hand movements for target selection tasks using a non-invasive BCI system. In: 2013 IEEE International Systems Conference (SysCon) (2013)
  55. Velliste, M., et al.: Cortical control of a prosthetic arm for self-feeding. Nature 453(7198), 1098–1101 (2008)
  56. Vidaurre, C., Blankertz, B.: Towards a cure for BCI illiteracy. Brain Topogr. 23(2), 194–198 (2010)
  57. Vidaurre, C., et al.: EEG-based BCI for the linear control of an upper-limb neuroprosthesis. Med. Eng. Phys. 38(11), 1195–1204 (2016)
  58. Vogel, J., et al.: An assistive decision-and-control architecture for force-sensitive hand–arm systems driven by human–machine interfaces. Int. J. Robot. Res. 34(6), 763–780 (2015)
  59. Wolpaw, J.R., McFarland, D.J.: Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. Proc. Natl. Acad. Sci. USA 101(51), 17849–17854 (2004)
  60. Wolpaw, J.R., et al.: An EEG-based brain-computer interface for cursor control. Electroencephalogr. Clin. Neurophysiol. 78(3), 252–259 (1991)
  61. Wright, J., et al.: A review of control strategies in closed-loop neuroprosthetic systems. Front. Neurosci. 10, 312 (2016)
  62. Xia, B., et al.: A combination strategy based brain–computer interface for two-dimensional movement control. J. Neural Eng. 12(4), 046021 (2015)
  63. Yuan, H., He, B.: Brain–computer interfaces using sensorimotor rhythms: current state and future perspectives. IEEE Trans. Biomed. Eng. 61(5), 1425–1435 (2014)

Copyright information

© Springer Nature Singapore Pte Ltd. 2018

Authors and Affiliations

  1. Department of Mechanical, Aerospace, and Biomedical Engineering, University of Tennessee, Knoxville, USA
  2. Department of Neurology, University of California, San Francisco/Berkeley, USA
  3. Department of Behavioral Science, College of Medicine, University of Kentucky, Lexington, USA
