Brain-Controlled Biometric Signals Employed to Operate External Technical Devices

We present a solution that employs brain-controlled biometric signals to operate external technical devices. Such signals include multichannel electroencephalographic (EEG) signals, electromyographic (EMG) signals reflecting muscle contraction patterns, geometrical patterns of body limb kinematics, and other modalities. Once collected, properly decoded, and interpreted, these signals can be used as control or navigation signals for artificial machines, e.g., technical devices. Our interface solution, based on a combination of signals of different modalities, can provide composite command and proportional multisite control of various technical devices with theoretically unlimited computational power. Feedback to the operator via a visual channel or in virtual reality makes it possible to compensate for control errors and provides adaptive control. The control system can be implemented with wearable electronics. Examples of technical devices under such control are presented.


Introduction
The development of technologies for human-machine communication that use human intellectual functions, e.g., the brain power, to control technical devices (TD) is certainly one of the most promising and dynamically progressing directions of scientific and technological development. Such a control strategy will theoretically have almost unlimited computational power. For example, a simple brain-controlled grasping movement of a finger (10^15 possible combinations of muscle contraction) can be estimated as the equivalent of a 10^6 GHz computer choosing the correct muscle contraction template in real time [11]. The brain mechanisms of such control remain largely unknown. However, once properly collected, detected, and classified, brain-controlled signals from the human body can be further utilized for highly efficient, even intelligent, control of multiparameter TDs.
This study focused mainly on the development of new methods and technologies for the remote control of TDs with respect to particular applications. The key purpose was the integration of human biometric, particularly bioelectrical, signals into a control loop. Such an approach involved online collection and interpretation of signals of different modalities, in particular EEG and multisite EMG signals, for different TDs and robotic systems. For this purpose, we developed technical solutions associating patterns of human brain and muscular activity with commands of the controlled object, according to a user-defined translation algorithm.
Despite significant achievements in the field of machine learning, it was recently proved that, within existing theories, the creation of a universal algorithm that can adapt to different conditions in a technical control system is theoretically impossible [24]. One of the expected advantages of brain-controlled devices over traditional control electronics is the possibility of adaptation due to human brain plasticity. Although research in neurointerface electronics is currently one of the key global trends, no commercial devices for TD control have yet been presented in which control signal generation is based on the registration, decoding, and wireless transmission of signals from the human brain and the bioelectrical activity of muscles. During the past decade, several groups of researchers and developers have specialized in portable interfaces that can be applied in everyday life. The best known of these are the NeuroSky, MindWave, MindBall, and Emotiv EPOC devices. However, these devices have limited functionality in terms of implementing a multisite adaptive control system. Their most significant drawbacks are long response times and a relatively low degree of reliability in the classification of bioelectrical activity patterns.
In this paper, we present a solution for an EEG and multisite EMG human-machine interface with a user-defined programmable function translating sensory signals into motor commands. For brevity, we call this solution "SRD-1," abbreviated from "system for registration and decoding."

Motor Imagery BCI System
Our brain-computer interface (BCI), designed as part of the SRD-1, consisted of several elements. Some of them used traditional EEG data analysis tools, while other classification algorithms were originally developed. Initially designed for a 32-channel EEG cap (including 10-20 system electrodes and two additional rows over the motor cortex), it can also be used with fewer electrodes, although a certain density of their placement over the motor cortex is recommended. For any given montage, the number of channels can be further optimized with one of the two following procedures: the first is based on common spatial patterns (CSP) and the second on spatio-spectral decomposition (SSD) [18].
The physiological basis of the developed BCI is the phenomenon of sustained desynchronization of the sensorimotor rhythm (SMR) over the motor cortex [27]. The designed BCI operated in the so-called asynchronous mode, continuously decoding the EEG signals and generating a control signal, which permits a pilot to operate at his/her own pace.
Control of any type of BCI requires preliminary training, which includes both the BCI pilot acquiring the skill to control the interface and the training of the data processing algorithm with data from the training recording [15]. This gave us two major fields of potential improvement. First, different practices can be used to improve the learning process of the user; it has been shown that certain types of feedback allow more accurate control of the interface (as shown in Fig. 1). Second, the signal processing and data analysis algorithms used to decode commands can be modified.
There is a large variety of techniques, but the most common sequence of processing steps includes removal of eye artifacts (e.g., with ICA), bandpass filtering, feature vector construction (the method of common spatial patterns, CSP, is often used to contrast the data from two classes), and eventually classification. The implemented BCI partly followed this chain (Fig. 1: neurofeedback signal processing schema). The signal processing algorithm included scanning for the optimal bands (in the 8-24 Hz range) to distinguish each pair of commands, building CSP features for each pair, and using regularized linear regression as a classifier. The outputs of the linear classifiers for all pairs were then processed by a multilayer perceptron, which performed multiclass classification.
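The pairwise CSP stage of this chain can be sketched as follows. This is an illustrative reconstruction in Python (NumPy/SciPy), not the authors' implementation; the band scanning, regularized regression, and perceptron stages are omitted, and all function names are ours:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=2):
    """Common spatial patterns for two classes of band-passed EEG epochs.

    X1, X2: arrays of shape (n_epochs, n_channels, n_samples).
    Returns spatial filters of shape (2*n_pairs, n_channels) that
    maximize the variance ratio between the two classes.
    """
    def mean_cov(X):
        return np.mean([np.cov(ep) for ep in X], axis=0)

    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Generalized eigenproblem: C1 w = lambda (C1 + C2) w
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(vals)
    # Filters from both ends of the spectrum are the most discriminative
    idx = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, idx].T

def csp_features(epoch, W):
    """Normalized log-variance features of the spatially filtered epoch."""
    proj = W @ epoch
    var = proj.var(axis=1)
    return np.log(var / var.sum())
```

In a multiclass setting such as the one described above, one such filter set would be computed for each pair of commands, and the resulting feature vectors passed to the pairwise classifiers.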
It should be mentioned that the EEG data analysis was hampered by signal non-stationarity, high noise, and the limited amount of training data, which led to classifier overfitting. As one solution to the overfitting problem in this context, we highlight the choice of CSP components based on the dipole fit of their topographies. The premise of this solution is that topographies that can be accurately described by the field of a dipole (i.e., whose residual fitting error is under a certain threshold) can be considered more plausible from the physiological point of view. Such components are regarded as more reliable in the sense that they reflect the activation of the corresponding regions of the cortex rather than artifacts unrelated to the task.
To perform this analysis, the topography corresponding to each of the found common spatial patterns was fitted with a dipole by minimizing the discrepancy between synthetic data and the topography vector. In practice, for a given point in physical space, this involved calculating the topographies of three orthogonal dipoles situated at that point and projecting the CSP topography onto the space orthogonal to the span of these topographies. The calculation of the dipole topographies was based on a three-layer spherical volume conduction model of the head [2]. Either each point of a three-dimensional coordinate grid was scanned, or a non-gradient optimization procedure (e.g., the Nelder-Mead simplex [19]) was used to find the coordinates of the dipole minimizing the residual error.
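The residual-error computation and its Nelder-Mead minimization can be sketched as below. This is a toy illustration only: a simplified 1/d² lead field on a random unit-sphere montage stands in for the three-layer spherical head model of [2], and all names, the sensor layout, and the forward model are our assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical sensor positions on a unit sphere (toy montage, not a
# real EEG cap layout).
rng = np.random.default_rng(1)
SENSORS = rng.normal(size=(32, 3))
SENSORS /= np.linalg.norm(SENSORS, axis=1, keepdims=True)

def lead_field(r):
    """Topographies of three orthogonal dipoles at location r.

    Toy forward model: potential falls off as 1/d^2 along each dipole
    axis (a stand-in for the three-layer spherical conduction model).
    Returns an (n_sensors, 3) matrix, one column per dipole orientation.
    """
    d = SENSORS - r                       # sensor-to-source vectors
    dist = np.linalg.norm(d, axis=1, keepdims=True)
    return d / dist**3

def residual(r, topo):
    """Relative error of topo after projection onto the dipole subspace."""
    L = lead_field(r)
    coef, *_ = np.linalg.lstsq(L, topo, rcond=None)  # dipole moments
    return np.linalg.norm(topo - L @ coef) / np.linalg.norm(topo)

def fit_dipole(topo, r0=np.zeros(3)):
    """Nelder-Mead search for the dipole location minimizing the residual."""
    res = minimize(lambda r: residual(r, topo), r0, method="Nelder-Mead")
    return res.x, res.fun
```

A CSP topography would be accepted as physiologically plausible when the returned residual falls below the chosen threshold (15% in Table 1).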

EMG-Based Neuromuscular Interface
Original multielectrode arrays for the simultaneous monitoring of multiple muscles were developed. The first layout contained four pairs of medical AgCl electrodes suitable for surface electromyogram recording of muscle activity. The electrodes were placed on flexible tissue, which was then worn on the arm (Fig. 2a). The experimental results showed that such an array was not suitable for long-term recording (several hours), because the electrodes could easily move. The loose contact with the skin and electrode motion eventually led to EMG signal distortion.
We then developed an array consisting of 12 silver-coated flexible planar electrodes mounted on a flexible substrate to provide maximum attachment to the body surface (Fig. 2b). Registration was performed in bipolar mode, i.e., the muscle signal was sensed by pairs of electrodes (6 in total) and one reference electrode, which was mounted near the elbow. The array was used to decode gestures generated by the arm. We arranged the electrodes to match the anatomy of the arm: the diameter of each electrode was 1 cm and the interelectrode distance was set to 2 cm.
The cover material of the electrodes provided stable, long-term recording with amplitudes up to several hundred microvolts. In particular, such an approach can be used in the further development of neurointerfaces for prosthetic limb control in medicine and rehabilitation, as well as commercial interfaces for everyday use.
The registered signals were then used for the classification of nine static hand gestures as motor patterns. During an experiment, users performed two series of nine gestures each, selected in random order; the first series was the learning set and the second the testing set. To extract discriminating features from the EMG signals, the root mean square (RMS) was calculated. Every 50 ms, the RMS was fed to a multilayer artificial neural network (ANN) for feature classification. The standard error backpropagation algorithm was used for learning. Earlier, we suggested using a network of spiking neurons as a feature extractor instead of the RMS [12]. In particular, we showed that mutual inhibition contrasted the EMG signals from different sensors and improved the classification accuracy.
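The RMS feature extraction over consecutive 50 ms windows can be sketched as follows. This is an illustrative reconstruction; the sampling rate and the function name are our assumptions, not taken from the paper:

```python
import numpy as np

def rms_features(emg, fs=1000, win_ms=50):
    """Root-mean-square of each EMG channel over consecutive windows.

    emg: array of shape (n_channels, n_samples); fs: sampling rate in Hz
    (assumed here). Returns an (n_windows, n_channels) array, one feature
    vector per 50 ms window, ready to be fed to the ANN classifier.
    """
    win = int(fs * win_ms / 1000)
    n_win = emg.shape[1] // win
    trimmed = emg[:, :n_win * win].reshape(emg.shape[0], n_win, win)
    return np.sqrt((trimmed ** 2).mean(axis=2)).T
```

Each row of the returned array would correspond to one classification step of the multilayer ANN described above.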
To optimize the ANN performance, we ran the gesture recognition process on the same set of data while varying the number of ANN layers, the number of neurons in the hidden layers, and the learning rate. Figure 3 shows the results. The ANN error dropped significantly between one and two layers and then rose slightly, while the learning time increased significantly (Fig. 3a). A similar picture is seen when changing the number of neurons in the hidden layers (Fig. 3b). Thus, we selected a network with two layers and eight neurons in the hidden layer for further experiments. We also observed that a learning rate of 0.01 optimizes both the learning error and the learning time (Fig. 3c), so this value was used in all experimental tests of the interface.
Our software and hardware implement both command control based on pattern classification and proportional control based on muscle effort estimation. We have suggested several schemes for combining these strategies [13,14]. In particular, the recognized pattern can be used to choose the movement direction and the muscle effort to set its speed. Note that personal classification accuracy varied significantly: for example, the accuracy of recognizing 9 patterns across 10 pilots ranged from 86.5 to 98.5%. In this regard, we explored the possibility of improving personal performance by training the user. In this investigation, eight pilots showed positive dynamics in performance after several days of training, which included sEMG feedback and playing a training game with the sEMG interface. Figure 4 shows this result in terms of normalized classification error. The main progress was achieved by the second training day. In any case, a short training course is needed before a pilot can effectively operate the EMG interface.
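The combination scheme in which the recognized pattern selects the direction and the muscle effort sets the speed might look like the following. The gesture names, direction vectors, and scaling constants are hypothetical, chosen only to illustrate the idea from [13,14]:

```python
import numpy as np

# Hypothetical mapping from recognized gesture to a unit direction
# in the device's 2D movement plane.
DIRECTIONS = {
    "fist":  np.array([1.0, 0.0]),    # forward
    "open":  np.array([-1.0, 0.0]),   # backward
    "left":  np.array([0.0, 1.0]),
    "right": np.array([0.0, -1.0]),
}

def control_command(gesture, effort, effort_max=1.0, v_max=0.5):
    """Command + proportional control: the classified gesture picks the
    movement direction, the estimated muscle effort sets the speed,
    clipped to [0, v_max]. All constants are illustrative."""
    speed = v_max * min(max(effort / effort_max, 0.0), 1.0)
    return DIRECTIONS[gesture] * speed
```

For example, `control_command("fist", 0.5)` would yield a forward velocity at half of the maximum speed.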

Motor Imagery BCI System
To estimate the effect of the proposed enhancements of the EEG data analysis algorithm (see Methods), we used pairs of recordings with an identical paradigm: the first was used as training data and the second as testing data. Table 1 shows the results for the topography choice procedure. The choice of CSP components based on dipole fitting improved the overall decoding accuracy. There are a number of ways to further enhance the decoding quality with this method. First, a simplified common model was used during dipole fitting; since the accuracy of dipole localization depends substantially on the head model, this method can be significantly improved by using individual head parameters. Second, heuristics for the optimal threshold choice can be developed (the results given in Table 1 correspond to a threshold of 15%).
Other proposed methods that leverage additional information include regularizing both the CSP filters and the neural network weights with templates calculated for experienced pilots (users who can generate clearer patterns). This can help prevent overfitting and distinguish informative features.
The designed BCI takes its place among the most advanced systems of its kind (see Table 2 for comparison). Its unique combination of features made it possible to reach high decoding accuracy for multiple commands while maintaining an asynchronous paradigm and requiring relatively few electrodes and a short training time.

EMG-Based Neuromuscular Interface
The quality of the sEMG data was comparable to similar data found in the literature [7,9,20,21,23,26]. One of the most important characteristics was the low noise level in the raw signal. The spectral characteristics corresponded to the expected values: a significant part of the signal power was localized in the range of 20-300 Hz (Fig. 5). The accuracy of the pattern classification algorithm was 92 ± 4% in the case of nine gestures and 97 ± 2% in the case of six gestures in the command control mode [12]. Such a high accuracy rate is very close to the attainable ("error-free") limit in the development of human-machine interfaces (Table 3).
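The reported band localization can be checked with a quick Welch-based estimate of the fraction of signal power falling in the 20-300 Hz range. This is an illustrative sanity check of raw-signal quality, not the authors' analysis pipeline:

```python
import numpy as np
from scipy.signal import welch

def band_power_fraction(x, fs, lo=20.0, hi=300.0):
    """Fraction of total power in the [lo, hi] Hz band of a 1D signal,
    estimated with Welch's method. A value near 1 indicates the sEMG
    power is concentrated in the expected band."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 1024))
    band = (f >= lo) & (f <= hi)
    return pxx[band].sum() / pxx.sum()
```

For a clean recording one would expect a large fraction; broadband noise or strong mains interference would pull the value down.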
The complete SRD-1 device consisted of the EEG and EMG modules and permitted control of TDs by both brain intention and muscle effort pattern. The tested TDs included a LEGO robot, the NAO humanoid robot, and a lower-limb exoskeleton [17]. We used the standard SDKs developed by the manufacturers of the devices. The connection between the SRD-1 and the tested TD was wireless, using Bluetooth for the LEGO and Wi-Fi for the NAO and the exoskeleton. If the SDK supported movement instructions, we sent a macro command directly (e.g., "go forward" in the case of the NAO and the exoskeleton); otherwise, we implemented the required macro command in our software and sent elementary commands to the device (e.g., "rotate motor A with speed x%" in the case of LEGO). According to our results, we succeeded in achieving correct control of the tested TDs by means of the pilot's bioelectric activity signals, interpreted by the hardware and software system into robotic commands (Fig. 6).

Conclusions
We developed a technical solution for collecting, decoding, and translating multimodal biometric data to control external TDs. Novel algorithms for the classification of human bioelectric activity patterns were developed. In particular, a new approach to the classification of muscle activity patterns was proposed, using a hybrid neural network and demonstrating the ability to classify 9 patterns with an accuracy of up to 98.3%. Furthermore, during the study, the possibility of achieving an acceptable (>75%) median classification accuracy for four motor imagery commands from EEG activity signals was demonstrated. Special attention should be paid to the fact that this accuracy was achieved using only 7 electrodes, which is important from the standpoint of the practical use of the developed complex. Experimental studies of the developed model of the software and hardware recording and decoding system (SRD-1) were performed (Fig. 6: (a) user interface of the SRD-1 programmable translator, and (b) SRD-1 control of the NAO robot by means of bioelectric muscle and brain activity signals). During operational testing, the SRD-1 operated correctly when controlling real TDs (an Aldebaran Robotics NAO and an exoskeleton for the lower limbs). We hope that the results of this study will be useful in many areas of human life and activity, including rehabilitation medicine, industrial robotics, and virtual reality systems. We believe that the development of a software and hardware complex for interpreting human bioelectric activity signals will provide a quantum leap in the technical development of these areas, and that devices expanding human capabilities will become a reality.
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.