Abstract
The high potential of brain-computer interfaces (BCIs) and video games for upper limb rehabilitation has been demonstrated in recent years. In this work, we describe the implementation of a prototype BCI with feedback based on a virtual environment to control the lateral movement of a character by predicting the subject's motor intention. The electroencephalographic signals were processed employing a Finite Impulse Response (FIR) filter, Common Spatial Patterns (CSP), and Linear Discriminant Analysis (LDA). A video game written in C# on the Unity3D platform served as the virtual environment. The test results showed that the prototype, based on electroencephalographic signal acquisition, has the potential to support real-time applications such as avatar control or assistive devices, obtaining a maximum control time of 65 s. In addition, it was noticed that feedback in an interface plays a crucial role, since it helps the person not only to feel motivated but also to learn to produce a more consistent motor intention. It was also observed that when little calibration data is recorded, the probability that the system makes erroneous predictions increases. These results demonstrate the usefulness of the development as support for people who require upper limb motor rehabilitation, and show that virtual environments such as video games can motivate such people during the rehabilitation process.
1 Introduction
One of the main problems when undergoing traditional rehabilitation in the upper extremities of the human body is related to their efficiency and the motivation of the patient [1]. The development of BCIs opens up the possibility of creating systems that are more efficient than traditional procedures, since they have the ability to interpret the subject’s motor intentions, allow for their training and that of the BCI system, maintain their attention, and contribute to increasing their motivation [2,3,4,5].
However, there are certain obstacles when designing and implementing BCI systems that must be taken into account during system development [6]. These obstacles are related to: neurological problems, technological difficulties, ethical concerns, non-stationarity of data from the same subject (changes in signal patterns over time), signal acquisition hardware, information transfer rate, and even the training process itself [2, 7,8,9,10,11].
Regarding the training process, it has been identified that feedback in an interface helps the person learn to generate an activity whose motor intention is more consistent; it is therefore essential to provide the subject with feedback so that they can identify whether they have executed the mental task correctly [12]. This consequently helps the person to gain better control of their brain activity, their motor functions, and the BCI application, and to reinforce their performance in the rehabilitation process [13, 14]. Indeed, several studies have shown that integrating a BCI with visual, auditory, and/or haptic stimulation is very useful and increases the efficiency of the BCI system, since it enhances the effects of the brain oscillation induced by the implicit emotion and the explicit effects of the task [15, 16].
The goal of this work is to implement a prototype of a BCI with feedback based on a virtual environment, with the purpose of it being used as a support in rehabilitation processes in the upper extremities of the human body.
In addition, the work includes a popular type of game in which winning a high score and advancing as far as possible is the main goal. The decision to use a game in this study was made in order for the subject to focus on playing, rather than on the thought of being rehabilitated.
Examples of similar works that integrate additional resources or traditional movements include the following: [17], a floating virtual avatar featuring upper limb movements of shoulder flexion–extension, shoulder abduction–adduction, and elbow flexion–extension; [18], a motor imagery-based BCI for a virtual avatar featuring upper limb movement, but which employs functional electrical stimulation (FES); [19], an interactive game with virtual reality; and [20], a rowing game in virtual reality in which one collects flags for points. Other examples include [21], an experiment that consisted of voluntarily grasping and relaxing one hand to trigger the closing and opening of an orthosis placed on the opposite hand; [22], a BCI for controlling a mobile robot based on EEG analysis and two mental tasks (a relaxed state and motor imagination of the right hand); and [23], a game that consisted of hitting one of two targets, one employing motor imagination and one using virtual reality.
This paper is organized as follows. Section 2 describes the materials and methods deployed in this study, including subject information, experimental setup, data acquisition, signal preprocessing, feature extraction, classification methods, virtual environment creation, classification of signals, video game integration, and the experimental, calibration, training, and final testing procedures. In Sect. 3, the results are presented, and Sect. 4 contains the discussion, covering the calibration and training procedures, motion classification, video game integration performance, the potential of the prototype, limitations, and recommendations for future work. This work concludes with Sect. 5.
2 Materials and methods
This section describes the materials, the procedures carried out during the execution of the experiment with the participation of test subjects, and the methodology used for signal acquisition, pre-processing, feature extraction, classification, and the control command generation of the implemented system, along with the main elements of the implemented video game.
2.1 Subject information
Healthy subjects with no history of neurological disease participated in the experiments. Six subjects were selected and gave their consent to participate in the tests. Each subject was given a task in which they were asked to imagine moving their arm in an upward motion in order to control the horizontal movement of a character in a video game. Demographic data of the subjects is shown in Table 1. The columns list the number assigned to each subject, sex, age, dominant side (L: left, R: right), and the total number of hours in testing.
2.2 Experimental procedure
The study included a signal verification stage, a calibration stage, a training stage, and finally, an online session using the video game. It is important to mention that all calibration data was recorded by the authors for use in the online sessions, so no data from other sources was used. First, the testing protocol, device operation, calibration procedure, training procedure, and final tests were explained to the subject. Then, the equipment was fitted onto the subject's head and the successful acquisition of the signals was verified using the OpenBCI GUI, a computer program that connects to the OpenBCI hardware, visualizes the acquired data, and transmits it to other applications [24]. The signal verification process is shown in Fig. 5.
Afterwards, the calibration, training and final testing procedures were carried out as described in the following paragraphs.
2.2.1 Calibration procedure
Subjects were instructed to imagine raising their arm while resting it on a table, according to a text cue shown on a computer screen as presented in Fig. 1. A script provided by OpenBCI was used, modified to present each marker (right or left) in the native language of the participants. The markers and cues were recorded using a program, also provided by OpenBCI, called "Lab Recorder", which stored them in .xdf format; this file holds the calibration data that is applied in the online processing and execution of the video game. Each participant was instructed to record calibration data 2 to 5 times. The calibration data consisted of a series of 60 markers that were displayed in random order, each followed by a 10 s pause. The start of each series was marked by the string 'calib-begin', and the end by the string 'calib-end'.
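As an illustration, the randomized cue series described above can be sketched in Python. The marker strings and timings follow the text; the function name and the perfectly balanced left/right split are assumptions for this sketch, not part of the OpenBCI script:

```python
import random

def make_cue_sequence(n_trials=60, classes=("left", "right"), seed=None):
    """Build a randomized calibration cue sequence: n_trials markers in
    random order, each followed by a 10 s pause, wrapped in the
    'calib-begin' / 'calib-end' markers used in the recordings."""
    rng = random.Random(seed)
    # Balanced classes (an assumption here; balance helps avoid biased training)
    cues = [classes[i % len(classes)] for i in range(n_trials)]
    rng.shuffle(cues)
    events = [("calib-begin", 0.0)]
    t = 0.0
    for cue in cues:
        events.append((cue, t))
        t += 10.0  # 10 s pause after each cue
    events.append(("calib-end", t))
    return events

seq = make_cue_sequence(seed=42)
```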
2.2.2 Training procedure
The operation and rules of the video game were explained to the subjects, after which they were given some time to familiarize themselves with the game voluntarily. In addition, during this stage, four implemented processing schemes were used in order to determine the subject's perception, measure the performance of each system, and choose the scheme that best adapted to the subject. Importantly, during the period of user interaction with the system, the subject's basic perceptions of the system, the video game itself, and the overall adaptability of the system were continuously probed.
2.2.3 Final tests
Each subject chose the implementation they felt most comfortable with (either the concordance of the imagined laterality or the laterality embodied in the system), and from there the challenge was to try to advance through the game as far as possible. Four of the six subjects voluntarily showed interest in repeating the game more than once during the final tests. During these tests, the following were analyzed: the dynamics of the subjects, subject behavior in response to indications, the subjects' answers to questions regarding the study and the video game itself, character survival times, classification thresholds, and the dominant sides of the body. Figure 2 shows a subject during the final tests.
2.3 Experimental setup
The schematic diagram of the prototype is shown in Fig. 3.
The elements that are part of the processing implemented in the NeuroPype tool are presented in Fig. 4.
The technological components used for the implementation of the prototype are defined in Table 2. The system uses a FIR filter [25] as part of the processing [26, 27], a CSP algorithm [28,29,30], and LDA [31] as the method of classification [32]. These elements are part of the processing implemented in the NeuroPype tool, as shown in Fig. 4.
2.4 Data acquisition and preprocessing
This section covers signal acquisition and signal preprocessing. It should be noted that before recording the EEG signals of each subject and running online tests, the signals measured with the 8-channel interface were visualized in order to verify the correct placement of each electrode, using the OpenBCI Graphical User Interface (GUI) program [24]. This verification is shown in Fig. 5, in which the impedance value of each electrode is displayed and its indicator turns green when the signal is being received correctly.
2.4.1 Signal acquisition
The EEG signals were recorded using an 8-channel neural interface and an EEG cap with wet electrodes for bio-potential measurement, as mentioned in Sect. 2.2. These signals were transmitted to the computer using a chipKIT™ bootloader, and the data was sampled at 250 Hz. The data was communicated to the computer using the OpenViBE Acquisition Server, a tool designed to communicate with various hardware signal acquisition devices through modules called drivers in a standardized, generic OpenViBE format [40]. The acquired signals were then selected and transmitted to NeuroPype [39]. Figure 6 represents the design implemented in OpenViBE for signal acquisition.
Figure 4 shows the "LSL Input" node at the start of the flow. This node allows reading the multichannel stream from "LSL Export" in real time [41]. It was configured to read 64-bit float data, automatically synchronize the clock, and connect the stream given by the signal name.
2.4.2 Signal preprocessing
After the signals are acquired and transmitted using the LSL protocol, the preprocessing stage is executed. This signal preprocessing stage was implemented in Neuropype (its diagram is shown in Fig. 7) and included the following nodes [42]:
-
Dejitter Timestamps: used to synchronize the timing of event markers with the data.
-
Import XDF: Its function is to import previously recorded calibration data saved in .xdf format. This format is used because it can store one or more streams of multichannel, LSL-recorded time series data, such as EEG with marker data.
-
Inject Calibration: used to inject the recorded calibration data before transmitting the data taken in real time, so that the following nodes can do their corresponding processing.
-
Assign Targets: used to map markers containing event-related signal activity and assign numerical target values to these markers for use in ML.
-
Select Range: used to select a subset of the given data along the spatial axis.
-
FIR Filter: used to apply the FIR filter to the signal, i.e., to select the signal frequency bands according to the ranges of interest in an EEG wave [43]. An FIR filter was used because it introduces less phase distortion in the region between the pass band (retained frequencies) and the stop band (suppressed frequencies) than an IIR filter [25, 27]. The node was configured to operate in band-pass mode, with the frequencies that determine the damping curves set to 6, 7, 30, and 32 Hz, according to EEG signal characteristics [44,45,46,47].
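As a rough illustration of this band-pass configuration, the following sketch designs a generic linear-phase windowed-sinc FIR filter (this is not the NeuroPype node's actual implementation; the cutoffs are taken mid-transition within the 6–7 Hz and 30–32 Hz damping curves, and the tap count is an assumption):

```python
import numpy as np

def fir_bandpass(lo_hz, hi_hz, fs, numtaps=251):
    """Linear-phase band-pass FIR via the windowed-sinc method
    (difference of two ideal low-pass responses, Hamming window)."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    def sinc_lowpass(fc):
        # Ideal low-pass impulse response with cutoff fc (Hz)
        return 2 * fc / fs * np.sinc(2 * fc / fs * n)
    return (sinc_lowpass(hi_hz) - sinc_lowpass(lo_hz)) * np.hamming(numtaps)

fs = 250                          # sampling rate used by the prototype
h = fir_bandpass(6.5, 31.0, fs)   # cutoffs inside the 6-7 / 30-32 Hz transitions

# Sanity check: a 15 Hz tone should pass, a 2 Hz drift should be suppressed.
t = np.arange(fs * 4) / fs
passband = np.convolve(np.sin(2 * np.pi * 15 * t), h, mode="same")
stopband = np.convolve(np.sin(2 * np.pi * 2 * t), h, mode="same")
```

The symmetric tap vector is what gives the FIR filter its linear-phase (low-distortion) behavior mentioned above.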
2.5 Feature extraction and classification of motor intention signals
Feature extraction and motor classification of signals were implemented in NeuroPype. The elements that make up this stage are described in Sects. 2.5.1 and 2.5.2.
2.5.1 Feature extraction
Feature extraction was based on the application of the CSP procedure, which separates a multivariate signal into additive subcomponents that have maximum variance differences between two windows. The algorithm determines the component \(w\) (see Eq. 1) such that the ratio of variances between the two windows is maximized [48,49,50]:

$$w=\underset{w}{\mathrm{argmax}}\,\frac{{w}^{T}{\Sigma }_{1}w}{{w}^{T}{\Sigma }_{2}w}$$
(1)

where \({\Sigma }_{1}\) and \({\Sigma }_{2}\) are the spatial covariance matrices of the signal in the two windows.
As can be seen in Fig. 8 this node is accompanied by the nodes “Segmentation”, “Variance”, and “Logarithm”.
The function of each of the nodes is as follows:
-
Segmentation: used to cut fixed-length segments from a continuous time series around each marker of interest. The data returned is a 3D array of extracted segments of the same length.
-
Variance: used to calculate the variance of the data on the time axis.
-
CSP: this node is used to extract the signal components whose variance will be used later in a binary classification configuration [29], since the resulting components usually offer better spectral characteristics than the raw channels, leading to higher classification accuracy [51, 52].
-
Logarithm: used to obtain the logarithm of each element of the data as a preliminary step for classification.
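The roles of the CSP, Variance, and Logarithm nodes above can be sketched with a minimal NumPy implementation of the standard CSP algorithm (an illustrative reconstruction, not NeuroPype's code; the array shapes, function names, and synthetic demo data are assumptions):

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_components=4):
    """Compute CSP spatial filters from two classes of EEG trials,
    each of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([np.cov(tr) for tr in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whitening transform for the composite covariance Ca + Cb.
    evals, evecs = np.linalg.eigh(Ca + Cb)
    P = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # Eigenvectors of the whitened Ca sort components by variance ratio.
    _, B = np.linalg.eigh(P @ Ca @ P.T)
    W = B.T @ P  # rows are spatial filters, ascending in class-a variance
    # Keep the extreme components: the most discriminative for each class.
    pick = list(range(n_components // 2)) + list(range(-(n_components // 2), 0))
    return W[pick]

def log_variance_features(trials, W):
    """Project trials through the CSP filters and take log-variance,
    mirroring the Variance and Logarithm nodes."""
    return np.array([np.log(np.var(W @ tr, axis=1)) for tr in trials])

# Synthetic demo: class a is strong on channel 0, class b on channel 1.
rng = np.random.default_rng(0)
class_a = rng.normal(size=(20, 4, 200)); class_a[:, 0, :] *= 5.0
class_b = rng.normal(size=(20, 4, 200)); class_b[:, 1, :] *= 5.0
W = csp_filters(class_a, class_b, n_components=2)
feats_a = log_variance_features(class_a, W)
feats_b = log_variance_features(class_b, W)
```

The resulting log-variance features are exactly the kind of low-dimensional, class-separable inputs the classifier in the next subsection expects.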
2.5.2 Classification of motor intention signals
To classify the subject's motor intention, the machine learning (ML) technique LDA was employed (see Fig. 9), a method for classifying two or more classes within supervised learning. LDA is a fast statistical method that learns a linear mapping of the input data to categorical labels [53, 54]. It is important to mention that this method must be calibrated ("trained") before it can make predictions on the data; to do so, it needs training instances and associated training labels [31, 32, 55]. Within NeuroPype, the way to obtain labels associated with the time series data is to include a stream of markers in the data, which in this case were imported along with the data using the import node and injected using the Inject Calibration Data node. These markers were annotated with target labels using the Assign Targets node (Fig. 10).
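A minimal sketch of what such an LDA classifier does internally is shown below (an illustrative NumPy implementation, not NeuroPype's node; the class name, regularization constant, and demo data are assumptions). Note that, as described above, it must be fit on calibration data before it can predict:

```python
import numpy as np

class BinaryLDA:
    """Minimal two-class LDA: project onto the direction that best
    separates the class means under the pooled within-class covariance."""
    def fit(self, X, y):
        X0, X1 = X[y == 0], X[y == 1]
        m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
        # Pooled within-class covariance, lightly regularized for stability.
        S = (np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)) / (len(X) - 2)
        S += 1e-6 * np.eye(X.shape[1])
        self.w = np.linalg.solve(S, m1 - m0)   # projection direction
        self.b = -self.w @ (m0 + m1) / 2       # threshold at the midpoint
        return self
    def predict_proba(self, X):
        # Sigmoid of the linear score: probability of class 1.
        return 1 / (1 + np.exp(-(X @ self.w + self.b)))

# Demo on two synthetic clusters of log-variance-like features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = BinaryLDA().fit(X, y)
proba = clf.predict_proba(X)
```

Returning a probability rather than a hard label matters here: as discussed in Sect. 3, the prototype thresholds these probabilities per subject before moving the character.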
2.6 Virtual environment: endless runner game
In order to have a virtual environment and visual and auditory feedback to reproduce the movement resulting from the brain-generated signals, a video game was developed. This game was based on a subgenre of platform games in which the player character is forced to run on a floating platform for an infinite amount of time while avoiding obstacles and collecting water droplets (represented as blue spheres) that increase the player’s score [56]. The main characteristics of the video game are presented in Table 3.
2.6.1 Classification of signals—endless game integration
To transmit the commands from NeuroPype to the video game, the Open Sound Control (OSC) communication protocol was used. The commands were transmitted as a vector with values in the range 0 to 1, and the video game code was adapted to translate this range into −1 (left) and 1 (right). To move the character to either side, the received control command had to exceed a threshold between 0.5 and 0.75 (tuned per subject), since those values mark the threshold of motor intention for each limb (left and right). Figure 11 shows the node implemented in NeuroPype, which is called OSC Output. Its configuration consisted of setting the IP address, port number, and message address.
The OSC library was installed in Unity 3D and according to its configuration methods, the input protocol was coded to receive the values coming from NeuroPype in the script that controls the character’s movement. Finally, the video game code was adjusted to assign the values corresponding to each range and displacement desired by the subject by means of if-else structures.
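The if-else mapping described above can be sketched as follows. This is a hypothetical Python stand-in for the C# logic in the Unity movement script; the function name and default threshold are assumptions, with the 0.5–0.75 threshold range taken from the text:

```python
def command_from_probability(p_right, threshold=0.6):
    """Translate the classifier output received over OSC (a value in
    [0, 1], probability of 'right') into a lateral movement command:
    +1 = move right, -1 = move left, 0 = hold. The threshold is tuned
    per subject within the 0.5-0.75 range reported in the paper."""
    if p_right >= threshold:
        return 1           # confident right intention
    if p_right <= 1 - threshold:
        return -1          # confident left intention
    return 0               # neither class passes its threshold: stay
```

Keeping a dead zone between the two thresholds is what prevents the character from jittering when the classifier is uncertain.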
3 Results
This section specifically presents the experimental results obtained from the implementation and subject tests.
3.1 Calibration procedure
The calibration data was recorded two to five times for each subject. According to the analysis in the training stage, the calibration data that showed the highest accuracy typically came from the second or third attempt for each subject. The subjects indicated that during the first calibration test they were generally less focused, since they were still trying to familiarize themselves with the methodology. Additionally, at the beginning of the tests the markers were presented in English, which confused the native Spanish speakers (5 of the 6 subjects); the markers were therefore changed to the subjects' native language in subsequent testing.
The tests were performed with calibration data of short duration only. Experimental results in the later (training) stage showed that with little calibration data applied to an ML method, the control commands turned out to be incorrect most of the time, a result that has also been reported in [60,61,62].
3.2 Motion classification accuracy
Four processing models were implemented, tested, and compared for each subject: Fast Fourier Transform (FFT) for feature extraction with LDA for signal classification, FFT with a linear Support Vector Classifier (SVC), CSP with SVC, and CSP with LDA. For the first tests in the training stage, the subject imagined raising one of their upper limbs and the researcher compared the results with the classification given by the system.
Then, in the second tests, the subject was asked to imagine raising either of their upper limbs according to the indications given by the researcher and to indicate, at the end of the four rounds, which of the models corresponded best to their motor intention. Likewise, in the second trials, the game survival times achieved by each subject were considered an important factor when choosing the model to be used in the final trials. The model chosen to continue to the final testing stage was the one based on the LDA classification method (CSP with LDA), with which the subjects achieved higher survival times and expressed feeling more comfortable; that is, LDA was the method whose results best matched the motor intention. The maximum survival time achieved and the total time each subject spent in testing are shown in Table 4.
Since LDA is a batch ML method, it cannot be trained incrementally on streamed data. The method therefore requires a data package containing all the training data, as explained in [53, 54]; that is, it needs to be calibrated ("trained") before it can make predictions, and it needs training instances and associated training labels [31, 32, 55]. To obtain these labels associated with the time series data, a stream of markers was included in the data, pre-recorded with "Lab Recorder" and imported using an import node. These markers were assigned a target label (1 or 0) using the Assign Targets node.
Finally, to generate training data instances for each of the training markers, it was important to use the Segmentation node of the NeuroPype implementation in order to extract segments of the continuous time series around each marker. Under these conditions, we confirmed that when little calibration data is available, the final classification contains errors: whenever too small an amount of calibration data was injected, the predictions failed. This is because if there are too few trials, or extensive stretches of the data exhibit only one class, the cross-validation performed by the method fails, as also confirmed in [63,64,65].
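This failure mode can be made concrete with a small feasibility check (an illustrative sketch, not part of the prototype; the function name and fold count are assumptions): stratified k-fold cross-validation needs at least k trials of every class, and at least two classes present at all.

```python
from collections import Counter

def can_cross_validate(labels, n_folds=5):
    """Return True only if stratified k-fold cross-validation is feasible:
    more than one class is present and every class has at least n_folds
    trials. Mirrors the failure seen when calibration recordings are too
    short or one-sided."""
    counts = Counter(labels)
    if len(counts) < 2:
        return False  # only one class recorded: nothing to discriminate
    return min(counts.values()) >= n_folds
```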
As for changes made to parts of the processing, in this stage it was very important to vary the segmentation ranges (in NeuroPype, this is done using the node called "Segmentation" presented in Sect. 2.5.1 and in Fig. 12), because the movement imagined by each subject was not performed at the same speed or with the same force. The variations in segmentation (Fig. 12) presented the following characteristics:
-
0.1–1.5: delayed and erroneous classifications in all tests.
-
0.5–1: erroneous and abruptly changing responses over time due to signal processing in small fragments.
-
0.5–2.5: mostly correct classifications, but with slow response (greater than 2 s) and therefore not suitable for the video game.
-
0.5–1.5: correct classifications, and acceptable response speed for adjustment within the video game.
By default, the classification method does not return a specific prediction (right or left), but rather the probability for each label. It was therefore necessary to change the control logic in the game code to move the character according to these probabilities. Although the thresholds had to be varied for each subject according to the probability values delivered by the system, all the probability thresholds for each class were above 0.5. This indicates that for each limb, there is a pattern of movement-intention classification probability above 0.5.
3.3 Training procedure and classification of signals—video game integration performance
Regarding the integration configured to transmit the data classified by the system to the video game, the OSC protocol worked properly according to the characteristics of the video game. This means that the data received in a vector-type variable was translated in the code into the appropriate commands for the character (movement to the left and right), and this same data allowed the character's movement in the game to have the necessary fluidity.
Finally, in terms of subject testing, subjects reported that for the first few rounds of play, they were trying to understand how to move the character along the platform, which caused them some confusion. However, after a few rounds (typically 3 or 4), subjects expressed that they felt more connection, understanding, and response to their movement intention, and four out of six subjects indicated that they were motivated to continue moving forward in the game.
4 Discussion
The study was conducted with healthy subjects, whose task was to imagine raising each of their upper limbs in order to control the movement of a character along the horizontal axis of a video game, with the idea that such a system could be used to support rehabilitation processes for the upper extremities of the human body. For this purpose, EEG signals were acquired using a cap with OpenBCI wet electrodes and processed to generate control commands that trigger the horizontal movement of the character. The results were analyzed not only in terms of signal processing and classification performance, but also in terms of how long each subject could sustain control of the application in real time.
The following is a description of the evaluation process for the calibration procedures and training, the motor classification and integration with the video game, and finally the significance of the findings, the limitations of the study, and recommendations for future work.
4.1 Calibration procedure and training procedure
The calibration stage was fundamental not only for recording the data to be injected into the prototype in order to train the system, but also for allowing the subject to begin interacting with it, as also covered in [66,67,68,69,70,71]. Although at the beginning of this stage subjects indicated some degree of confusion, subsequent recordings resulted in greater comfort and rapport with the procedure. Additionally, at the end of each recording, the reports given by the subjects were analyzed for errors, possible future improvements, and variations that could be used in subsequent tests. A clear example of an improvement for future work would be to ensure that the subjects use the same limb movement (at the same speed and force) and to use tools that limit variations in said force and speed across rounds of testing.
It was also noted that when little calibration data is recorded, the probability that the system makes erroneous predictions increases [70, 72]. It should be remembered that calibration data for any MI method must have certain characteristics, such as balance in its markers to avoid biases in the predictions, no erroneous records, and a sufficient number of records. In spite of this, having totally clean data based on EEG signals is one of the major obstacles for the implementation of BCIs due to, among other factors, the tedious calibration process and the non-stationarity of the data. This can make it difficult to process brain signals and, consequently, to control any application that is designed [73,74,75].
4.2 Motion classification and video game integration performance
The choices of classifiers and signal processing techniques for the prototype were based on previously published systematic reviews [70, 76,77,78,79]. Even so, the final model for the prototype also depended on the observations made in the training stage with the subjects. It is important to emphasize that the prototype implemented in this work is a proof of concept, implemented as a step toward verifying the usefulness of, and possible areas for improvement in, the design of BCIs related to the efficiency, application, and motivation of subjects who use or could use a BCI to support upper limb rehabilitation.
Regarding the integration method configured to transmit the data classified by the system to the video game, the OSC protocol allowed the researcher to properly generate the appropriate commands for the character (movement to the left and right) at a speed suitable for the application.
Subjects showed more motivation in the video game interaction stage than in the preliminary setup and training stages. This reaffirms findings presented in other work [16, 80,81,82] indicating that feedback in games (and other systems), as well as virtual environments, are motivating, stimulating factors that contribute to improvements in the magnitude, patterns, and frequencies of EEG signals, as well as in the level of attention and concentration. We note that such feedback enhances the training and the connection between the BCI system and the subject.
4.3 Potential of the prototype, limitations, and recommendations for future work
The results obtained are a proof of concept that supports the findings of other studies and highlights promising aspects of the design and implementation of BCIs in video games. Although the test subjects were all healthy and the sample size of this study was not large, it was possible to identify both limitations and positive effects, which mainly suggest the possibility of further improving the design and testing of the prototype.
Among the limitations and challenges observed and also mentioned in [83,84,85] were: significant variations in the EEG signals between the subjects (which can create errors during signal processing), variations in the EEG patterns over time and between each of the tests that can cause incompatibilities, and variations in the probabilities given by the system for each subject, which can cause the application (the video game) to need to be adjusted depending on the subject.
On the other hand, it was evidenced that the methods highlighted by previous studies [26, 28, 30, 86], such as CSP, FIR filters, and LDA achieve the desired classifications for EEG signals, which make them suitable to be included in certain applications. However, the processing system can be improved in terms of response times and prediction accuracy. In order to overcome these obstacles, for future work we intend to perform more detailed tests with a larger number of subjects, utilizing the help of physical rehabilitation professionals and using deep learning. According to studies in [14, 87,88,89,90], deep learning has shown to better handle complex, non-stationary, unstructured, noisy, and artifact-rich signals.
Likewise, and as a fundamental point of the training process and final tests with the subjects, it was noted that the feedback in an interface plays a crucial role, since it helps the person not only to feel more motivated to continue, but also to learn to generate an activity with a more consistent motor intention. This is because by providing feedback to the subject, they identify whether they have executed the mental task correctly, which consequently helps them to have more control of their brain activity, motor function, the BCI application, and to reinforce performance in the rehabilitation process as shown in other studies [12,13,14,15,16]. Indeed, the subjects reported feeling more comfortable, focused, and immersed when they were playing, compared to earlier tests where they were only instructed to imagine raising one of their upper limbs.
Finally, the characteristics of the game were also highly determinant when it came to subject immersion. This is in accordance with the fact that fundamental aspects in the design of video games such as music, sound effects, visuals, and game mechanics are factors that can also alter the results of tests and the motivation of the subjects [91]. However, the analysis of each of these factors, and thus the impact of results from attempts at improvement, can be addressed in subsequent work.
5 Conclusions
The BCI prototype implemented was based on the acquisition of EEG signals using an OpenBCI wet-electrode cap, signal processing and classification using the NeuroPype program, and transmission of the resulting classification data to a virtual environment (a video game), where code translated the received data into control commands for moving a character on an infinitely generated platform. The tests were performed with healthy subjects who engaged in trials of 1–5 h, and the maximum survival time in the game recorded by any participant was 65 s. The processing included stages applying methods such as CSP, a FIR filter, and LDA, with satisfactory results. Our tests showed the possibility of testing, and likely implementing, the prototype for people who require motor rehabilitation treatment, and confirmed that a video game used as subject feedback can be highly motivating, as mentioned in [92, 93]. Possible extensions of this work include improving the classification speed of the prototype so that it can be used in applications that require faster responses, as well as changes or variations to the video game following technical patterns from the area of game development. Finally, another possible extension is to use the prototype with people who require rehabilitation treatment for their upper limbs and to utilize virtual environments such as video games in said treatment.
Data availability
The authors confirm that the data supporting the findings are available from the corresponding author, upon reasonable request.
References
Teo WP, Chew E (2014) Is motor-imagery brain-computer interface feasible in stroke rehabilitation? PM R 6(8):723–728. https://doi.org/10.1016/j.pmrj.2014.01.006
Mudgal SK, Sharma SK, Chaturvedi J, Sharma A (2020) Brain computer interface advancement in neurosciences: Applications and issues. Interdiscip Neurosurg. https://doi.org/10.1016/j.inat.2020.100694
Lotte F (2009) Study of electroencephalographic signal processing and classification techniques towards the use of brain-computer interfaces in virtual reality applications. PhD thesis, INSA de Rennes, France
Wolpaw JR, Millán JDR, Ramsey NF (2020) Brain-computer interfaces: definitions and principles. Handb Clin Neurol 168:15–23. https://doi.org/10.1016/B978-0-444-63934-9.00002-0
Birbaumer N (2006) Breaking the silence: brain-computer interfaces (BCI) for communication and motor control. Psychophysiology 43(6):517–532. https://doi.org/10.1111/j.1469-8986.2006.00456.x
Abdulkader SN, Atia AM, Mostafa MS (2015) Brain computer interfacing: applications and challenges. Egyptian Inform J 16(2):213–230. https://doi.org/10.1016/j.eij.2015.06.002
Kosmyna N, Lécuyer A (2019) A conceptual space for EEG-based brain-computer interfaces. PLOS ONE. https://doi.org/10.1371/journal.pone.0210145
Dinarès-Ferran J, Ortner R, Guger C, Solé-Casals J (2018) A new method to generate artificial frames using the empirical mode decomposition for an EEG-based motor ımagery BCI. Front Neurosci. https://doi.org/10.3389/fnins.2018.00308
Nakra A, Duhan M (2022) Motor imagery EEG signal classification using long short-term memory deep network and neighbourhood component analysis. Int J Inf Technol. https://doi.org/10.1007/s41870-022-00866-4
Rukhsar S (2022) Discrimination of multi-class EEG signal in phase space of variability for epileptic seizure detection using error correcting output code (ECOC). Int J Inf Technol. https://doi.org/10.1007/s41870-018-0224-y
Patil S, Patil KR, Patil CR (2020) Performance overview of an artificial intelligence in biomedics: a systematic approach. Int J Inf Technol. https://doi.org/10.1007/s41870-018-0243-8
Lei B, Liu X, Liang S, Hang W, Wang Q, Choi KS et al (2019) Walking imagery evaluation in brain computer interfaces via a multi-view multi-level deep polynomial network. IEEE Trans Neural Syst Rehabil Eng 27(3):497–506. https://doi.org/10.1109/TNSRE.2019.2895064
Do AH, Wang PT, King CE, Abiri A, Nenadic Z (2011) Brain-computer interface controlled functional electrical stimulation system for ankle movement. J Neuroeng Rehabil. https://doi.org/10.1186/1743-0003-8-49
Karácsony T, Hansen JP, Iversen HK, Puthusserypady S (2019) Brain computer interface for neuro-rehabilitation with deep learning classification and virtual reality feedback. In: Proceedings of the 10th Augmented Human International Conference, pp 1–8. https://doi.org/10.1145/3311823.3311864
Schubring D, Kraus M, Stolz C, Weiler N, Keim DA, Schupp H (2020) Virtual reality potentiates emotion and task effects of alpha/beta brain oscillations. Brain Sci 10(8):537. https://doi.org/10.3390/brainsci10080537
Jiang L, Guan C, Zhang H, Wang C, Jiang B (2011) Brain computer interface based 3D game for attention training and rehabilitation. In: 2011 6th IEEE Conference on Industrial Electronics and Applications. https://doi.org/10.1109/ICIEA.2011.5975562
Fernández-Vargas J, Tarvainen TVJ, Kita K, Yu W (2017) Effects of using virtual reality and virtual avatar on hand motion reconstruction accuracy and brain activity. IEEE Access 5:23736–23750. https://doi.org/10.1109/ACCESS.2017.2766174
Miao Y, Chen S, Zhang X, Jin J, Xu R, Daly I et al (2020) BCI-based rehabilitation on the stroke in sequela stage. Neural Plast 2020:10. https://doi.org/10.1155/2020/8882764
Lin B, Hsu H, Jan GE, Chen J (2016) An interactive upper-limb post-stroke rehabilitation system integrating BCI-based attention monitoring and virtual reality feedback. In: Third International Conference on Computing Measurement Control and Sensor Network (CMCSN), Matsue. https://doi.org/10.1109/CMCSN.2016.33
Vourvopoulos A, Jorge C, Abreu R, Figueiredo P, Fernandes J, Bermúdez I et al (2019) Efficacy and brain imaging correlates of an immersive motor imagery BCI-driven VR system for upper limb motor rehabilitation: a clinical case report. Front Hum Neurosci 13(244):1–17. https://doi.org/10.3389/fnhum.2019.00244
King C, Dave K, Wang P, Mizuta M, Reinkensmeyer D, Do A et al (2014) Performance assessment of a brain-computer interface driven hand orthosis. Ann Biomed Eng 42:2095–2105. https://doi.org/10.1007/s10439-014-1066-9
Vourvopoulos A, Bermúdez I, Badia S (2016) Motor priming in virtual reality can augment motor-imagery training efficacy in restorative brain-computer interaction: a within-subject analysis. J Neuroeng Rehabil. https://doi.org/10.1186/s12984-016-0173-2
Karácsony T, Hansen JP, Iversen HK, Puthusserypady S (2019) Brain computer interface for neuro-rehabilitation with deep learning classification and virtual reality feedback. In: Augmented Human International Conference (AH2019). https://doi.org/10.1145/3311823.3311864
OpenBCI (2022) OpenBCI. https://openbci.com/downloads. Accessed 1 Oct 2022
Hanna MT (1996) Design of linear phase FIR filters with a maximally flat passband. IEEE Trans Circuits Syst II 43(2):142–147. https://doi.org/10.1109/82.486462
Higashi H, Tanaka T (2013) Simultaneous design of FIR filter banks and spatial patterns for EEG signal classification. IEEE Trans Biomed Eng. https://doi.org/10.1109/TBME.2012.2215960
Nag A, Dhabal S, Das TP, Venkateswaran P (2022) Design of band-pass FIR filter using modified social group optimization algorithm and its implementation on FPGA. In: 2022 IEEE International Conference on Signal Processing, Informatics, Communication and Energy Systems (SPICES) 1:135–140. https://doi.org/10.1109/SPICES52834.2022.9774267
Antony MJ, Sankaralingam BP, Mahendran RK, Gardezi AA, Shafiq M, Choi JG et al (2022) Classification of EEG using adaptive SVM classifier with CSP and online recursive independent component analysis. Sensors 22(19):7596. https://doi.org/10.3390/s22197596
Zhang S, Zhu Z, Zhang B, Feng B, Yu T, Li Z (2020) The CSP-based new features plus non-convex log sparse feature selection for motor imagery EEG classification. Sensors. https://doi.org/10.3390/s20174749
Yang J, Ma Z, Shen T (2021) Multi-time and multi-band CSP motor imagery EEG feature classification algorithm. Appl Sci 11(21):10294. https://doi.org/10.3390/app112110294
Egwom OJ, Hassan M, Tanimu JJ, Hamada M (2022) An LDA–SVM machine learning model for breast cancer classification. Biomedinformatics 2:345–358. https://doi.org/10.3390/biomedinformatics2030022
Ramírez-Arias FJ, García-Guerrero EE, Tlelo-Cuautle E, Colores-Vargas JM, García-Canseco E, López-Bonilla OR et al (2022) Evaluation of machine learning algorithms for classification of EEG signals. Technologies 10(4):79. https://doi.org/10.3390/technologies10040079
OpenBCI (2022) OpenBCI Shop. https://shop.openbci.com. Accessed 1 Oct 2022
Muzyka IM, Estephan B (2019) Chapter 35—Somatosensory evoked potentials. Handb Clin Neurol 160:523–540. https://doi.org/10.1016/B978-0-444-64032-1.00035-7
Zheng F, Sato S, Mamada K, Ozaki N, Kubo J, Kakuda W (2022) EEG correlation coefficient change with motor task. Neurol Int 14:738–747. https://doi.org/10.3390/neurolint14030062
Reilly KT, Sirigu A (2011) Motor cortex representation of the upper-limb in individuals born without a hand. PLOS ONE. https://doi.org/10.1371/journal.pone.0018100
Campos ACd, Sukal-Moulton T, Huppert T, Alter K, Damiano DL (2020) Brain activation patterns underlying upper limb bilateral motor coordination in unilateral cerebral palsy: an fNIRS study. Dev Med Child Neurol 62(5):625–632. https://doi.org/10.1111/dmcn.14458
OpenVibe (2022) OpenVibe. http://openvibe.inria.fr. Accessed 1 Oct 2022
Intheon (2022) NeuroPype. https://www.neuropype.io. Accessed 1 Oct 2022
OpenVibe (2022) Acquisition Server. OpenVibe. http://openvibe.inria.fr/acquisition-server/. Accessed 1 Oct 2022
Labstreaminglayer (2022) Labstreaminglayer. https://labstreaminglayer.readthedocs.io/info/intro.html. Accessed 1 Oct 2022
NeuroPype (2022) NeuroPype. https://www.neuropype.io/docs/user_guide/index.html. Accessed 1 Oct 2022
Oshana R (2012) DSP for embedded and real-time systems. Elsevier, Amsterdam, pp 113–131
Vanhatalo S, Voipio J, Kaila K (2005) Full-band EEG (FbEEG): an emerging standard in electroencephalography. Clin Neurophysiol 116(1):1–8. https://doi.org/10.1016/j.clinph.2004.09.015
Kumar JS, Kumar JS (2012) Analysis of electroencephalography (EEG) signals and its categorization–a study. Procedia Eng 38:2525–2536. https://doi.org/10.1016/j.proeng.2012.06.298
Plucińska R, Jędrzejewski K, Waligóra M, Malinowska U, Rogala J (2022) Impact of EEG frequency bands and data separation on the performance of person verification employing neural networks. Sensors. https://doi.org/10.3390/s22155529
Ji N, Ma L, Dong H, Zhang X (2019) EEG signals feature extraction based on DWT and EMD combined with approximate entropy. Brain Sci 8:9. https://doi.org/10.3390/brainsci9080201
Ai Q, Liu Q, Meng W, Xie SQ (2018) Chapter 6—EEG-based brain intention recognition in advanced rehabilitative technology. Elsevier, Amsterdam
Majidov I, Whangbo T (2019) Efficient classification of motor imagery electroencephalography signals using deep learning methods. Sensors. https://doi.org/10.3390/s19071736
Antony MJ, Sankaralingam BP, Mahendran RK, Gardezi AA, Shafiq M, Choi JG et al (2022) Classification of EEG using adaptive SVM classifier with CSP and online recursive independent component analysis. Sensors. https://doi.org/10.3390/s22197596
Premjit Singh N, Gautam AK, Sharan T (2022) Chapter 13—An insight into the hardware and software aspects of a BCI system with focus on ultra-low power bulk driven OTA and Gm-C based filter design, and a detailed review of the recent AI/ML techniques. Artif Intell Brain Comput Interface. https://doi.org/10.1016/B978-0-323-91197-9.00015-1
Yu H, Lu H, Wang S, Xia K, Jiang Y, Qian P (2019) A general common spatial patterns for EEG analysis with applications to vigilance detection. IEEE Access 7:111102–111114. https://doi.org/10.1109/ACCESS.2019.2934519
Laport F, Castro PM, Dapena A, Vazquez-Araujo FJ, Iglesia D (2020) Study of machine learning techniques for EEG eye state detection. Proceedings 54(1):23. https://doi.org/10.3390/proceedings2020054053
Varone G, Boulila W, Giudice ML, Benjdira B, Mammone N, Ieracitano C et al (2022) A Machine learning approach involving functional connectivity features to classify Rest-EEG psychogenic non-epileptic seizures from healthy controls. Sensors 22(1):129. https://doi.org/10.3390/s22010129
Chen M, Wang Q, Li X (2018) Discriminant analysis with graph learning for hyperspectral image classification. Remote Sens 10(6):836. https://doi.org/10.3390/rs10060836
Jamil A, Murtza Z, Nazir MK, Waseem M, Ghulam Z, Farooq RU (2019) A generic formal specification of an infinite runner games for handheld devices using Z-notation. In: 2019 IEEE 4th International Conference on Computer and Communication Systems (ICCCS). https://doi.org/10.1109/CCOMS.2019.8821750
SK-Studios (2022) Github. https://github.com/SK-Studios/3D-Endless-Runner-in-Unity. Accessed 1 Oct 2022
Mixkit (2022) Mixkit. https://mixkit.co. Accessed 1 Oct 2022
Chosic (2022) Chosic. https://www.chosic.com. Accessed 1 Oct 2022
Uçar MK, Nour M, Sindi H, Polat K (2020) The effect of training and testing process on machine learning in biomedical datasets. Math Probl Eng 2836236:17. https://doi.org/10.1155/2020/2836236
Niakan Kalhori SR, Zeng XJ (2014) Improvement the accuracy of six applied classification algorithms through integrated supervised and unsupervised learning approach. J Comput Commun. https://doi.org/10.4236/jcc.2014.24027
Ayodele TO (2010) Machine learning overview. New Adv Mach Learn. https://doi.org/10.5772/9374
Refaeilzadeh P, Tang L, Liu H (2009) Cross-Validation. In: Ling LIU, Özsu MT (eds) Encyclopedia of database systems. Boston, Springer, pp 532–538
Amari S, Murata N, Muller KR, Finke M, Yang HH (1997) Asymptotic statistical theory of overtraining and cross-validation. IEEE Trans Neural Networks 8(5):985–996. https://doi.org/10.1109/72.623200
Santos MS, Soares JP, Abreu PH, Araujo H, Santos J (2018) Cross-validation for imbalanced datasets: avoiding overoptimistic and overfitting approaches [Research Frontier]. IEEE Comput Intell Mag 13(4):59–76. https://doi.org/10.1109/MCI.2018.2866730
Pillette L, N’Kaoua B, Sabau R, Glize B, Lotte F (2021) Multi-session influence of two modalities of feedback and their order of presentation on MI-BCI user training. Multimodal Technol Interact 5(3):12. https://doi.org/10.3390/mti5030012
Velasco-Álvarez F, Fernández-Rodríguez Á, Vizcaíno-Martín FJ, Díaz-Estrella A, Ron-Angevin R (2021) Brain-computer interface (BCI) control of a virtual assistant. Sensors 21(11):3716. https://doi.org/10.3390/s21113716
Peguero JDC, Mendoza-Montoya O, Antelis JM (2020) Single-option P300-BCI performance is affected by visual stimulation conditions. Sensors 20(24):7198. https://doi.org/10.3390/s20247198
Singh A, Lal S, Guesgen HW (2019) Reduce calibration time in motor imagery using spatially regularized symmetric positive-definite matrices based classification. Sensors 19(2):379. https://doi.org/10.3390/s19020379
Mridha MF, Das SC, Kabir MM, Lima AA, Islam MR, Watanobe Y (2021) Brain-computer interface: advancement and challenges. Sensors 21(17):5746. https://doi.org/10.3390/s21175746
Xu B, Li W, Liu D, Zhang K, Miao M, Xu G et al (2022) Continuous hybrid BCI control for robotic arm using noninvasive electroencephalogram, computer vision, and eye tracking. Mathematics 10(4):618. https://doi.org/10.3390/math10040618
Wu D, Xu Y, Lu BL (2022) Transfer learning for EEG-based brain-computer interfaces: a review of progress made since 2016. IEEE Trans Cognit Develop Syst 14(1):4–19. https://doi.org/10.1109/TCDS.2020.3007453
Saha S, Mamun KA, Ahmed K, Mostafa R, Naik GR, Darvishi S et al (2021) Progress in brain computer interface: challenges and opportunities. Front Syst Neurosci. https://doi.org/10.3389/fnsys.2021.578875
Khalaf A, Akcakaya M (2020) A probabilistic approach for calibration time reduction in hybrid EEG–fTCD brain–computer interfaces. BioMed Eng Online. https://doi.org/10.1186/s12938-020-00765-4
Huggins JE, Karlsson P, Warschausky SA (2022) Challenges of brain-computer interface facilitated cognitive assessment for children with cerebral palsy. Front Hum Neurosci. https://doi.org/10.3389/fnhum.2022.977042
Rasheed S (2021) A review of the role of machine learning techniques towards brain-computer interface applications. Mach Learn Knowl Extr 3(4):835–862. https://doi.org/10.3390/make3040042
Palumbo A, Gramigna V, Calabrese B, Ielpo N (2021) Motor-imagery EEG-based BCIs in wheelchair movement and control: a systematic literature review. Sensors 21(18):6285. https://doi.org/10.3390/s21186285
Prashant P, Joshi A, Gandhi V (2015) Brain computer interface: a review. In: 2015 5th Nirma University International Conference on Engineering (NUiCONE). https://doi.org/10.1109/NUICONE.2015.7449615
Camargo-Vargas D, Callejas-Cuervo M, Mazzoleni S (2021) Brain-computer interfaces systems for upper and lower limb rehabilitation: a systematic review. Sensors 21(13):4312. https://doi.org/10.3390/s21134312
Choi H, Lim H, Kim JW, Kang YJ, Ku J (2019) Brain computer interface-based action observation game enhances mu suppression in patients with stroke. Electronics 8(12):1466. https://doi.org/10.3390/electronics8121466
Cattan G, Andreev A, Visinoni E (2020) Recommendations for integrating a P300-based brain-computer interface in virtual reality environments for gaming: an update. Computers 9(4):92. https://doi.org/10.3390/computers9040092
Värbu K, Muhammad N, Muhammad Y (2022) Past, present, and future of EEG-based BCI applications. Sensors 22(9):3331. https://doi.org/10.3390/s22093331
Martisius I (2016) Data acquisition and signal processing methods for brain-computer interfaces. Doctoral dissertation, Kaunas University of Technology, Kaunas, Lithuania
Lotte F, Jeunet C, Chavarriaga R, Bougrain L, Thompson DE, Scherer R et al (2019) Turning negative into positives! Exploiting ‘negative’ results in brain-machine interface (BMI) research. Workshops of the eighth international brain–computer interface meeting: BCIs: the next frontier. Brain Comput Interfaces 9(2):69–101. https://doi.org/10.1080/2326263X.2019.1697143
Baek HJ, Chang MH, Heo J, Park KS (2019) Enhancing the usability of brain-computer interface systems. Comput Intell Neurosci 2019:12. https://doi.org/10.1155/2019/5427154
Aggarwal S, Chugh N (2019) Signal processing techniques for motor imagery brain computer interface: a review. Array. https://doi.org/10.1016/j.array.2019.100003
Lei B, Liu X, Liang S, Hang W, Wang Q, Choi KS et al (2019) Walking imagery evaluation in brain computer interfaces via a multi-view multi-level deep polynomial network. IEEE Trans Neural Syst Rehabil Eng 27(3):497–506. https://doi.org/10.1109/TNSRE.2019.2895064
Choi WS, Yeom HG (2022) Studies to overcome brain-computer interface challenges. Appl Sci 12(5):2598. https://doi.org/10.3390/app12052598
Nakra A, Duhan M (2022) Brain computer interfacing system using grey wolf optimizer and deep neural networks. Int J Inf Technol. https://doi.org/10.1007/s41870-022-01066-w
Nakra A, Duhan M (2023) Deep neural network with harmony search based optimal feature selection of EEG signals for motor imagery classification. Int J Inf Technol. https://doi.org/10.1007/s41870-021-00857-x
Loup-Escande E, Lotte F, Loup G, Lécuyer A (2015) User-centred BCI videogame design. In: Nakatsu Ryohei, Rauterberg Matthias, Ciancarini Paolo (eds) Handbook of digital games and entertainment technologies. Springer, Singapore, pp 1–26
Brilliant TD, Nouchi R, Kawashima R (2019) Does video gaming have impacts on the brain: evidence from a systematic review. Brain Sci 9(10):251. https://doi.org/10.3390/brainsci9100251
Paszkiel S, Rojek R, Lei N, Castro MA (2021) A pilot study of game design in the unity environment as an example of the use of neurogaming on the basis of brain-computer interface technology to improve concentration. NeuroSci 2(2):109–119. https://doi.org/10.3390/neurosci2020007
Acknowledgements
This work was supported by the Software Research Group GIS from the School of Computer Science, Engineering Department, Universidad Pedagógica y Tecnológica de Colombia (UPTC).
Funding
This study was funded by Universidad Pedagógica y Tecnológica de Colombia (project number SGI 3303) and the APC was funded by the same institution.
Author information
Authors and Affiliations
Contributions
Conceptualization, DC-V and MC-C; methodology, DC-V and MC-C; software, DC-V; validation, DC-V, MC-C and ACA-A; formal analysis, DC-V; investigation, DC-V, MC-C and ACA-A; resources, MC-C; data curation, DC-V, MC-C and ACA-A; writing—original draft preparation, DC-V; writing—review and editing: DC-V, MC-C and ACA-A; visualization, DC-V; supervision, MC-C and ACA-A; project administration, MC-C; funding acquisition, MC-C. All authors have read and agreed to the published version of the manuscript.
Corresponding author
Ethics declarations
Conflicts of interest
The authors declare no conflict of interest.
Informed consent
Informed consent was obtained from all subjects involved in the study.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Camargo-Vargas, D., Callejas-Cuervo, M. & Alarcón-Aldana, A.C. Brain-computer interface prototype to support upper limb rehabilitation processes in the human body. Int. j. inf. tecnol. 15, 3655–3667 (2023). https://doi.org/10.1007/s41870-023-01400-w