Brain-computer interface prototype to support upper limb rehabilitation processes in the human body

The high potential for creating brain-computer interfaces (BCIs) and video games for upper limb rehabilitation has been demonstrated in recent years. In this work, we describe the implementation of a prototype BCI with feedback based on a virtual environment to control the lateral movement of a character by predicting the subject's motor intention. The electroencephalographic signals were processed employing a Finite Impulse Response (FIR) filter, Common Spatial Patterns (CSP), and Linear Discriminant Analysis (LDA). In addition, a video game written in C# on the Unity3D platform was used as the virtual environment. The test results showed that the prototype, based on electroencephalographic signal acquisition, has the potential to support real-time applications such as avatar control or assistive devices, reaching a maximum control time of 65 s. It was also noticed that feedback in an interface plays a crucial role, since it helps the person not only to feel motivated, but also to learn to produce a more consistent motor intention. Furthermore, when little calibration data is recorded, the probability that the system makes erroneous predictions increases. These results demonstrate the usefulness of the development as support for people who require upper limb motor rehabilitation, and show that the use of virtual environments, such as video games, can motivate such people during the rehabilitation process.


Introduction
One of the main problems when undergoing traditional rehabilitation in the upper extremities of the human body is related to their efficiency and the motivation of the patient [1]. The development of BCIs opens up the possibility of creating systems that are more efficient than traditional procedures, since they have the ability to interpret the subject's motor intentions, allow for their training and that of the BCI system, maintain their attention, and contribute to increasing their motivation [2][3][4][5].
However, there are certain obstacles when designing and implementing BCI systems that must be taken into account during system development [6]. These obstacles are related to: neurological problems, technological difficulties, ethical concerns, non-stationarity of data from the same subject (changes in signal patterns over time), signal acquisition hardware, information transfer rate, and even the training process itself [2][7][8][9][10][11].
Regarding the training process, it has been identified that feedback in an interface helps the person to learn to generate an activity whose motor intention is more consistent; it is therefore essential to provide the subject with feedback so that they can identify whether they have executed the mental task correctly [12]. This will consequently help the person to have better control of their brain activity, their motor functions, and the BCI application, and to reinforce their performance in the rehabilitation process [13,14]. Indeed, several studies have shown that the integration of BCI with visual, auditory and/or haptic stimulation is very useful and increases the efficiency of the BCI system, since it enhances the effects of the brain oscillation induced by the implicit emotion and the explicit effects of the task [15,16].
(Mauro Callejas-Cuervo and Andrea Catherine Alarcón-Aldana contributed equally to this work. Int. j. inf. tecnol. (October 2023) 15(7):3655-3667)
The goal of this work is to implement a prototype of a BCI with feedback based on a virtual environment, with the purpose of it being used as a support in rehabilitation processes in the upper extremities of the human body.
In addition, the work includes a popular type of game in which winning a high score and advancing as far as possible is the main goal. The decision to use a game in this study was made in order for the subject to focus on playing, rather than on the thought of being rehabilitated.
Examples of similar works that integrate additional resources or traditional movements include the following: [17], a floating virtual avatar featuring the upper limb movements of shoulder flexion-extension, shoulder abduction-adduction, and elbow flexion-extension; [18], a motor imagery-based BCI for a virtual avatar featuring upper limb movement, but which employs functional electrical stimulation (FES); [19], an interactive game with virtual reality; and [20], a rowing game in virtual reality where one collects flags for points. Other examples include [21], an experiment that consisted of voluntarily grasping and relaxing one hand to trigger the closing and opening of an orthosis placed on the opposite hand; [22], a BCI for controlling a mobile robot based on EEG analysis and two mental tasks (relaxed state and motor imagination of the right hand); and [23], a game that consisted of hitting one of two targets, one employing motor imagination and one using virtual reality.
This paper is organized as follows. Section 2 describes the materials and methods deployed in this study, including subject information, experimental setup, data acquisition, signal preprocessing, feature extraction, classification methods, virtual environment creation, classification of signals, video game integration, and the experimental, calibration, training, and final test procedures. In Sect. 3, the Results are presented, and Sect. 4 contains the Discussion, which covers the calibration and training procedures, motion classification, video game integration performance, the potential of the prototype, limitations, and recommendations for future work. This work concludes with Sect. 5.

Materials and methods
This section describes the materials, the procedures carried out during the execution of the experiment with the participation of test subjects, and the methodology used for signal acquisition, pre-processing, feature extraction, classification, and the control command generation of the implemented system, along with the main elements of the implemented video game.

Subject information
Healthy subjects with no history of neurological diseases participated in the experiments. Six subjects were selected and gave their consent to participate in the tests. Each subject was given a task in which they were asked to imagine that they were moving their arm in an upward motion in order to control the movement of a character along the horizontal axis of a video game. Demographic data of the subjects is shown in Table 1. The columns list the number assigned to the subject, sex, age, dominant side (L: left, R: right), and total number of hours in testing.

Experimental procedure
The study included a signal verification stage, a calibration stage, a training stage, and finally, an online session using the video game. It is important to mention that all calibration data was recorded by the authors for use in the online sessions; no data from other sources was used. First, the testing protocol, device operation, calibration procedure, training procedure, and final tests were explained to the subject. Then, the equipment was fitted onto the subject's head and the successful acquisition of the signals was verified through the use of a tool called OpenBCI GUI, a computer program that connects to the OpenBCI hardware, visualizes the anatomical data, and transmits said data to other applications [24]. The signal verification process is shown in Fig. 5.
Afterwards, the calibration, training and final testing procedures were carried out as described in the following paragraphs.

Calibration procedure
Subjects were instructed to imagine that they were raising their arm while resting it on a table, according to a text indication shown on a computer screen, as presented in Fig. 1. A script provided by OpenBCI was used, modified to present each marker (right or left) in the native language of the participants. Additionally, the markers and cues were recorded using "Lab Recorder", a program also provided by OpenBCI, in .xdf format; this file stores the calibration data that is applied in the online processing and execution of the video game. Each participant was instructed to record calibration data 2 to 5 times. The calibration data consisted of a series of 60 markers displayed in random order, each followed by a 10 s pause. The start of each series was marked by the string 'calib-begin', and the end of each series by the string 'calib-end'.

Training procedure
The operation and rules of the video game were explained to the subjects, after which they were given some time to familiarize themselves with the game voluntarily. In addition, during this stage, four implemented processing schemes were used in order to determine the subject's perception, define the performance of each system, and choose the scheme that best adapted to the subject. Importantly, during the period of user interaction with the system, the subject's basic perceptions of the system, the video game itself, and the overall adaptability of the system were continuously probed.

Final tests
Each subject chose the implementation they felt most comfortable with (either the concordance of the imagined laterality or the laterality embodied in the system), and from there the challenge was to try to advance through the game as far as possible. Four of the six subjects voluntarily showed interest in repeating the game more than once. During the final tests, the following were analyzed: the dynamics of the subjects, subject behavior in response to indications, questions that were asked of the subjects regarding the study, the video game itself, character survival times, classification thresholds, and dominant sides of the body. Figure 2 shows a subject during the final tests.

Experimental setup
The schematic diagram of the prototype is shown in Fig. 3. The elements that are part of the processing implemented in the NeuroPype tool are presented in Fig. 4.
The technological components used for the implementation of the prototype are defined in Table 2. The system uses a FIR filter [25] as part of the processing [26,27], a CSP algorithm [28][29][30], and LDA [31] as the method of classification [32]. These elements are part of the processing implemented in the NeuroPype tool, as shown in Fig. 4.

Data acquisition and preprocessing
This section includes signal acquisition and signal preprocessing. It should be noted that before recording the EEG signals of each subject and running online tests, the signals measured with the 8-channel interface were visualized and tested in order to verify the correct placement of each of the electrodes, using the OpenBCI Graphical User Interface (GUI) program [24]. The verification of the placement of each of the electrodes is shown in Fig. 5, which displays the impedance value of each electrode; when the source turns green, the signal is being received correctly.
Table 2 Technological components used for the implementation of the prototype:
• OpenBCI Cyton board: 8-channel neural interface with a 32-bit processor (PIC32MX250F128B microcontroller, chipKIT™ bootloader); data is sampled at 250 Hz on each of the eight channels [33]. The channels acquired were C3, CZ, C4, P3, PZ, P4, O1, O2, plus GND and REF [34], based on the 10-20 EEG system and the location of the motor regions for the upper limbs [35][36][37].
• EEG electrode cap kit: cap for EEG bio-potential measurements with wet electrodes.
• OpenViBE: software for the design and testing of BCIs. The package includes a signal acquisition tool and an application design tool [38].
• NeuroPype: a platform for real-time brain-computer interfacing, neuroimaging, and bio/neural signal processing. It includes an open-source visual pipeline designer and tools for interfacing with diverse sensor hardware and recording data [39].
Fig. 5 Visualization and testing of the measured EEG signals in the OpenBCI GUI program

Signal acquisition
The EEG signals were recorded using the 8-channel neural interface and the EEG cap with wet electrodes mentioned in Sect. 2.2. These signals were transmitted to the computer by the board firmware (chipKIT™ bootloader), with the data sampled at 250 Hz. The data was communicated to the computer using the OpenViBE Acquisition Server, a tool designed to communicate with various hardware signal acquisition devices through modules called drivers, in a standardized, generic OpenViBE format [40]. The acquired signals were then selected and transmitted to NeuroPype [39]. Figure 6 represents the design implemented in OpenViBE for signal acquisition. Figure 4 shows the "LSL Input" node at the start of the flow; this node reads the multichannel stream from "LSL Export" in real time [41]. It was configured to read 64-bit float data, automatically synchronize the clock, and connect to the stream given by the signal name.

Signal preprocessing
After the signals are acquired and transmitted using the LSL protocol, the preprocessing stage is executed. This stage was implemented in NeuroPype (its diagram is shown in Fig. 7) and included the following nodes [42]:
• Dejitter Timestamps: used to synchronize the timing of event markers with the data.
• Import XDF: imports previously recorded calibration data saved in .xdf format. This format is used because it can store one or more streams of multichannel, LSL-recorded time series data, such as EEG with marker data.
• Inject Calibration: injects the recorded calibration data before transmitting the data taken in real time, so that the following nodes can do their corresponding processing.
• Assign Targets: maps markers containing event-related signal activity and assigns numerical target values to these markers for use in ML.
• Select Range: selects a subset of the given data along the spatial axis.
• FIR Filter: applies the FIR filter to the signal; that is, it selects the signal frequency bands according to the ranges of an EEG wave [43]. This filter type was used since it introduces less phase distortion in the region between the pass band (retained frequencies) and the stop band (suppressed frequencies) than an IIR filter [25,27]. The node is configured to operate in band-pass mode, where the frequencies that determine the damping curves are 6, 7, 30, and 32 Hz, according to EEG signal characteristics [44][45][46][47].
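As an illustrative sketch (not the NeuroPype node itself), the band-pass FIR step can be reproduced with SciPy; the 7-30 Hz cutoffs used here approximate the 6, 7, 30, and 32 Hz damping curves described above, and the function name and filter order are our own choices:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

FS = 250  # Cyton sampling rate (Hz)

def bandpass_fir(data, low=7.0, high=30.0, numtaps=101):
    """Band-pass FIR filtering of an (n_channels, n_samples) EEG array."""
    taps = firwin(numtaps, [low, high], pass_zero=False, fs=FS)
    # filtfilt gives zero-phase output for this offline sketch; a real-time
    # pipeline would apply the taps causally instead.
    return filtfilt(taps, 1.0, data, axis=-1)

# Demo: a 10 Hz component (inside the band) survives, a 50 Hz one is attenuated.
t = np.arange(0, 4, 1 / FS)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 50 * t)
y = bandpass_fir(x[np.newaxis, :])[0]
```

With 101 taps at 250 Hz, the transition bands are a few hertz wide, roughly matching the damping curves above.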

Feature extraction and classification of motor intention signals
Feature extraction and motor classification of signals were implemented in NeuroPype. The elements that make up this stage are described in Sects. 2.4.1 and 2.4.2.

Feature extraction
Feature extraction was based on the application of the CSP mathematical procedure, which separates a multivariate signal into additive subcomponents that have maximum variance differences between two windows. The algorithm determines the spatial filter w that maximizes the ratio of variances between the two windows X_1 and X_2 (Eq. 1) [48][49][50]:

w = arg max_w ( ||w^T X_1||^2 / ||w^T X_2||^2 )  (1)

As can be seen in Fig. 8, this node is accompanied by the nodes "Segmentation", "Variance", and "Logarithm".
The function of each of the nodes is as follows:
• Segmentation: used to cut fixed-length segments from a continuous time series around each marker of interest. The data returned is a 3D array of extracted segments of the same length.
• Variance: used to calculate the variance of the data on the time axis.
• CSP: used to extract the signal components whose variance will be used later in a binary classification configuration [29], since the resulting components usually offer better spectral characteristics than the raw channels, leading to higher classification accuracy [51,52].
• Logarithm: used to obtain the logarithm of each element of the data as a preliminary step for classification.
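For illustration, the Segmentation, CSP, Variance, and Logarithm chain can be re-implemented in a few lines of Python; this is a minimal sketch under our own conventions (per-trial arrays of shape (channels, samples)), not the NeuroPype implementation:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_components=4):
    """Compute CSP spatial filters from two lists of (n_channels, n_samples) trials.

    Solves the generalized eigenvalue problem Ca w = lambda (Ca + Cb) w and keeps
    the filters with the most extreme eigenvalues (largest variance ratio per class).
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    evals, evecs = eigh(Ca, Ca + Cb)
    order = np.argsort(evals)
    half = n_components // 2
    pick = np.concatenate([order[:half], order[-half:]])
    return evecs[:, pick].T  # (n_components, n_channels)

def log_variance_features(trial, W):
    """Project one trial onto the CSP components, then take normalized log-variance."""
    z = W @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())

# Synthetic demo: class A is strong on channel 0, class B on channel 1.
rng = np.random.default_rng(0)
def make_trials(strong_ch, n=20, n_ch=8, n_samp=500):
    trials = []
    for _ in range(n):
        t = rng.normal(size=(n_ch, n_samp))
        t[strong_ch] *= 5.0
        trials.append(t)
    return trials

A, B = make_trials(0), make_trials(1)
W = csp_filters(A, B)
fa = log_variance_features(make_trials(0, n=1)[0], W)
fb = log_variance_features(make_trials(1, n=1)[0], W)
```

On such data the feature vectors of the two classes separate clearly, which is what makes the subsequent linear classification effective.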

Classification of motor intention signals
To classify the subject's motor intention, LDA was employed (see Fig. 9), a supervised learning method for classifying two or more classes. LDA is a fast statistical method that learns a linear mapping from the input data to categorical labels [53,54]. It is important to mention that this method needs to be calibrated ("trained") before it can make predictions on the data; to do this, it needs training instances and associated training labels [31,32,55]. Within NeuroPype, the way to obtain these labels associated with the time series data is to include a stream of markers in the data, which in this case were imported along with the data using the import node and injected using the Inject Calibration Data node. These markers were annotated with target labels using the Assign Targets node (Fig. 10).
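A minimal sketch of this calibrate-then-predict workflow, using scikit-learn's LDA on synthetic stand-ins for the log-variance features (all data, dimensions, and labels here are illustrative assumptions, not the recorded calibration data):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical calibration set: one feature vector per marker, labelled
# 0 (left) / 1 (right) as the Assign Targets step would do.
rng = np.random.default_rng(42)
X_left = rng.normal(loc=-1.0, scale=0.5, size=(60, 4))
X_right = rng.normal(loc=+1.0, scale=0.5, size=(60, 4))
X = np.vstack([X_left, X_right])
y = np.array([0] * 60 + [1] * 60)

clf = LinearDiscriminantAnalysis()
clf.fit(X, y)  # "calibration" happens once, on the full batch of training data

# At run time the classifier outputs per-class probabilities rather than hard
# labels, matching the 0-1 command range later sent to the game.
proba = clf.predict_proba(rng.normal(loc=+1.0, scale=0.5, size=(1, 4)))
```

The probabilistic output is what allows per-subject thresholds to be tuned in the game code instead of relying on a fixed decision boundary.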

Virtual environment: endless runner game
In order to have a virtual environment and visual and auditory feedback to reproduce the movement resulting from the brain-generated signals, a video game was developed. This game was based on a subgenre of platform games in which the player character is forced to run on a floating platform for an infinite amount of time while avoiding obstacles and collecting water droplets (represented as blue spheres) that increase the player's score [56]. The main characteristics of the video game are presented in Table 3.

Classification of signals: endless game integration
To transmit the commands from NeuroPype to the video game, the Open Sound Control (OSC) communication protocol was used. The commands were transmitted as a vector, in a range between 0 and 1, and the video game code was adapted to translate the received range as − 1 (left) and 1 (right). To move the character to either side, it was necessary that the received control command be between 0.5 and 0.75 since those particular values mark the threshold of motor intention for each of the limbs (left and right). Figure 11 shows the node implemented in NeuroPype, which is called OSC Output. Its configuration was based on setting the IP, port number, and message address. The OSC library was installed in Unity 3D and according to its configuration methods, the input protocol was coded to receive the values coming from NeuroPype in the script that controls the character's movement. Finally, the video game code was adjusted to assign the values corresponding to each range and displacement desired by the subject by means of if-else structures.
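The threshold logic described above can be sketched as follows; it is shown in Python for brevity (in the prototype it is implemented as C# if-else structures in the character's movement script), and the function name is ours:

```python
def command_from_probabilities(left_prob, right_prob, lo=0.5, hi=0.75):
    """Map the classifier's per-limb probabilities (0-1) to a horizontal command.

    A limb "wins" only when its probability falls inside the [lo, hi] window
    described in the text; otherwise the character stays in place.
    """
    left_hit = lo <= left_prob <= hi
    right_hit = lo <= right_prob <= hi
    if left_hit and not right_hit:
        return -1  # move left
    if right_hit and not left_hit:
        return 1   # move right
    return 0       # remain in place on the horizontal axis
```

Requiring exactly one limb to fall inside the window avoids jitter when both probabilities hover near 0.5.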

Results
This section specifically presents the experimental results obtained from the implementation and subject tests.

Calibration procedure
The calibration data was recorded two to five times for each subject. According to the analysis in the training stage, the calibration data that showed the highest accuracy typically came from the second or third attempt for each subject. The subjects indicated that, in general, they were more unfocused during the first calibration test, since they were trying to familiarize themselves with the methodology. Additionally, at the beginning of the tests the markers were presented in English, which confused the native Spanish speakers (5 of the 6 subjects). Therefore, the markers were changed to the subjects' native language in subsequent testing.
Table 3 Main characteristics of the video game:
• Implementation: The game was written in C# on the Unity3D platform. The code base was taken from GitHub [57] and modified by the authors according to experimental requirements and game design.
• Description: The game belongs to a subgenre of platform games in which the player character runs for an infinite amount of time until making a mistake and receiving a "game over". It was implemented in a 3D graphical format and uses procedural generation; for this reason, the game environment appears to be continuously generated in front of the player.
• Goal: The object of the game is to achieve a high score by surviving as long as possible (not falling off the floating platform) and obtaining as many spheres (water droplets) as possible.
• Score and levels: The game design uses two types of scoring. The first refers to scoring points each time the player catches water droplets, and the second refers to scoring distance points by surviving as long as possible (see Fig. 10). The level shown to the player depends on the sum of the points (water droplets) obtained: with every 20 droplets caught, the level increases by 1.
• Mechanics: Through motor imagery, the player thinks about raising their arm. Depending on which arm is being focused on, the character moves on the horizontal axis to the left or to the right accordingly. The character must advance through the environment by moving on the horizontal axis without touching any of the logs (obstacles). When the character touches a log, the game ends and restarts. The game also ends when the character falls off either of the lateral sides of the platform. Instructions are presented at the bottom of Fig. 10, by clicking on "Help".
• Control: The control is determined by motor imagery. The classification made by the implemented system transmits the control commands in ranges from 0 to 1 for each upper limb (left or right). These values are then processed in the game code and assigned to the horizontal movement of the character depending on thresholds previously analyzed and adjusted in code. To move the character to either side, the control command received must be between the values of 0.5 and 0.75. If the two values received are outside these ranges, the character remains in place on the horizontal axis.
• Sound effects: The sound effects were taken from [58]. Each time the character catches a water droplet, touches a log, or falls off the platform, a corresponding sound is played.
• Music: Background music is continuously played, taken from [59].
• Graphics: The graphics were designed entirely in the Unity 3D editor, version 2020.3.33f1.
The tests were performed with calibration data of short duration only. Experimental results at the later (training) stage showed that with little calibration data applied to an ML method, the control commands turned out to be incorrect most of the time, a result also reported in [60][61][62].

Motion classification accuracy
Four processing models were implemented, tested, and compared for each subject: Fast Fourier Transform (FFT) for feature extraction with LDA for signal classification, FFT with a linear Support Vector Classifier (SVC), CSP with SVC, and CSP with LDA. For the first tests in the training stage, the subject imagined raising one of their upper limbs and the researcher compared the results with the classification given by the system.
Then, in the second tests, the subject was asked to imagine raising either of their upper limbs according to the indications given by the researcher, and to indicate, at the end of the four rounds, which of the models corresponded best to their motor intention. Likewise, in the second trials, the game survival times achieved by each subject were considered an important factor when choosing the model to be used in the final trials. The model chosen for the final testing stage was CSP with LDA, with which the subjects achieved the highest survival times and expressed feeling most comfortable; that is, LDA produced the results that best matched the motor intention. The maximum survival time achieved and the total time each subject spent in testing are shown in Table 4.
Since the LDA classifier is a batch machine learning method, it cannot be incrementally trained on streamed data. The method therefore requires a data package containing all the training data, as explained in [53,54]; that is, it needs to be calibrated ("trained") before it can make predictions, and it needs training instances and associated training labels [31,32,55]. In order to obtain these labels associated with the time series data, a stream of markers was included in the data that was pre-recorded with "Lab Recorder" and imported using an import node. These markers were assigned a target label (1 or 0) using the Assign Targets node.
Finally, to generate training data instances for each of the training markers as part of the implementation in NeuroPype, it was important to use the Segmentation node to extract segments of the continuous time series around each marker. Under these conditions, we confirmed that when little calibration data is available, the final classification contains errors: whenever too small an amount of calibration data was injected, the predictions failed. This is because, if there are too few trials, or some extensive stretches of the data exhibit only one class, the cross-validation performed by the method fails, as also confirmed in [63][64][65].
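The role of the Segmentation node can be illustrated with a short sketch (our own re-implementation, assuming marker onsets are expressed in samples and a 250 Hz sampling rate):

```python
import numpy as np

FS = 250  # sampling rate (Hz)

def segment_around_markers(signal, marker_samples, t_min=0.5, t_max=1.5):
    """Cut fixed-length segments around each marker, as the Segmentation node does.

    signal: (n_channels, n_samples) array; marker_samples: marker onsets in samples.
    Returns a 3D array (n_markers, n_channels, segment_length), skipping any
    marker whose window would fall outside the recording.
    """
    lo, hi = int(t_min * FS), int(t_max * FS)
    segs = [signal[:, m + lo:m + hi] for m in marker_samples
            if m + lo >= 0 and m + hi <= signal.shape[1]]
    return np.stack(segs)

# Demo: 8 channels, 8 s of data, three markers; the last marker's window
# extends past the end of the recording and is dropped.
sig = np.random.default_rng(1).normal(size=(8, 2000))
segs = segment_around_markers(sig, [300, 1000, 1900])
```

The (t_min, t_max) pair corresponds directly to the segmentation ranges whose effect on classification quality is compared below.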
As for changes made to the processing, at this stage it was very important to vary the segmentation ranges (in NeuroPype, this is done using the "Segmentation" node presented in Sect. 2.4.1 and in Fig. 12), because the movement imagined by each subject did not have the same speed or force. The variations in segmentation (Fig. 12) presented the following characteristics:
• 0.1-1.5 s: delayed and erroneous classifications in all tests.
• 0.5-1 s: erroneous and abruptly changing responses over time, due to processing the signal in small fragments.
• 0.5-2.5 s and above: mostly correct classifications, but with a slow response (greater than 2 s) and therefore not suitable for the video game.
• 0.5-1.5 s: correct classifications and an acceptable response speed for adjustment within the video game.
Table 4 Maximum survival time and total testing time per subject:
Subject | Maximum survival time (s) | Total testing time (h)
1 | 30 | 3
2 | 50 | 5
3 | 20 | 2
4 | 10 | 2
5 | 15 | 1
6 | 65 | 1
By default, the classification method does not return a specific prediction (right or left), but rather the probability for each label, so it was necessary to change the control logic in the game code to move the character according to these probabilities. Even so, it was observed that, although the thresholds had to be varied for each subject according to the probability values delivered by the system, all the probability thresholds for each class were above 0.5. This indicates that for each limb there is a pattern of movement intention classification probability above 0.5.

Training procedure and classification of signals: video game integration performance
Regarding the integration configured to transmit the data classified by the system to the video game, the OSC protocol worked properly according to the characteristics of the video game. This means that the data, received in a vector-type variable, was translated in the code to generate the appropriate commands for the character (movement to the left and right), and this same data allowed the necessary fluidity in the character's movement. Finally, in terms of subject testing, subjects reported that for the first few rounds of play they were trying to understand how to move the character along the platform, which caused them some confusion. However, after a few rounds (typically 3 or 4), subjects expressed that they felt more connection, understanding, and response to their movement intention, and four out of six subjects indicated that they were motivated to continue moving forward in the game.

Discussion
The study was conducted with healthy subjects, whose task was based on imagining the raising of each of their upper limbs in order to control the movement of a character on the horizontal axis of a video game, with the idea that such a system could be used as support in rehabilitation processes for the upper extremities of the human body. For this purpose, EEG signals were acquired through the use of a cap containing OpenBCI wet electrodes, and processed to generate control commands and trigger the horizontal movement of the character. The results were analyzed not only in terms of signal processing and classification performance, but also in terms of how long each subject could sustain control of the application in real time.
The following is a description of the evaluation process for the calibration procedures and training, the motor classification and integration with the video game, and finally the significance of the findings, the limitations of the study, and recommendations for future work.

Calibration procedure and training procedure
The calibration stage was fundamental not only to record the data to be injected into the prototype in order to train the system, but also to begin to allow the subject to interact with it, as also covered in [66][67][68][69][70][71]. Although at the beginning of this stage subjects indicated some degree of confusion, subsequent recordings resulted in greater comfort and rapport with the procedure. Additionally, at the end of each recording, the reports given by the subjects were analyzed for errors, possible future improvements, and variations that could be used in subsequent tests. A clear example of an improvement for future work would be to ensure that the subjects use the same limb movement (at the same speed and force), employing tools that limit variations in said force and speed across rounds of testing.
It was also noted that when little calibration data is recorded, the probability that the system makes erroneous predictions increases [70,72]. It should be remembered that calibration data for any MI method must have certain characteristics, such as balance in its markers to avoid biases in the predictions, no erroneous records, and a sufficient number of records. In spite of this, having totally clean data based on EEG signals is one of the major obstacles for the implementation of BCIs due to, among other factors, the tedious calibration process and the non-stationarity of the data. This can make it difficult to process brain signals and, consequently, to control any application that is designed [73][74][75].

Motion classification and video game integration performance
The choices of classifiers and signal processing techniques for the prototype were based on previously published systematic reviews [70][76][77][78][79]. Even so, the final model for the prototype also depended on the observations made in the training stage with the subjects. It is important to emphasize that the prototype implemented in this work is a proof of concept, implemented as a step to verify the usefulness and possible areas for improvement in the design of BCIs related to the efficiency, application, and motivation of subjects who use or could use a BCI to support upper limb rehabilitation.
Regarding the integration method configured to transmit the data classified by the system to the video game, the OSC protocol allowed the researcher to properly generate the appropriate commands for the character (movement to the left and right) at a speed suitable for the application.
Subjects showed more motivation in the video game interaction stage than in the preliminary setup and training stages. This reaffirms findings presented in other work [16][80][81][82] indicating that feedback in games (and other systems), as well as virtual environments, are motivating, stimulating factors that contribute to improvements in the magnitude, patterns, and frequencies of EEG signals, as well as in the level of attention and concentration. We note that said feedback enhances the training and the connection between the BCI system and the subject.

Potential of the prototype, limitations, and recommendations for future work
The results obtained constitute a proof of concept that supports the findings of other studies and highlights aspects of potential within the design and implementation of BCIs in video games. Although the test subjects were all healthy and the sample size of this study was not large, it was possible to identify both limitations and positive effects that mainly suggest the possibility of further improving the design and testing of the prototype. Among the limitations and challenges observed, also mentioned in [83][84][85], were: significant variations in the EEG signals between subjects (which can create errors during signal processing), variations in the EEG patterns over time and between tests that can cause incompatibilities, and variations in the probabilities given by the system for each subject, which can require the application (the video game) to be adjusted depending on the subject.
On the other hand, it was evident that the methods highlighted by previous studies [26,28,30,86], such as CSP, FIR filters, and LDA, achieve the desired classifications of EEG signals, which makes them suitable for inclusion in certain applications. However, the processing system can still be improved in terms of response times and prediction accuracy. To overcome these obstacles, in future work we intend to perform more detailed tests with a larger number of subjects, with the help of physical rehabilitation professionals, and to incorporate deep learning. According to the studies in [14,[87][88][89][90], deep learning has been shown to better handle complex, non-stationary, unstructured, noisy, and artifact-rich signals.
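As a minimal illustration of one of these processing stages, the sketch below applies an FIR filter by direct convolution in pure Python. It is not the prototype's NeuroPype pipeline: a real EEG pipeline would use many more taps, with coefficients designed for a band of interest (e.g., the 8-30 Hz sensorimotor range), rather than the crude moving-average coefficients shown here.

```python
def fir_filter(signal, coeffs):
    """Apply a causal FIR filter, y[n] = sum_k coeffs[k] * x[n - k],
    by direct convolution; samples before the start are treated as zero."""
    out = []
    for n in range(len(signal)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * signal[n - k]
        out.append(acc)
    return out

# 4-tap moving average as a stand-in low-pass; an EEG pipeline would use
# bandpass coefficients produced by a filter design routine instead.
smoothed = fir_filter([1.0, 1.0, 1.0, 1.0, 1.0], [0.25] * 4)
```

The output ramps up over the first few samples (the filter's transient) and then settles at the input level, which is the expected behavior of a unity-gain low-pass FIR filter on a constant signal.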
Likewise, and as a fundamental point of the training process and the final tests with the subjects, it was noted that feedback in an interface plays a crucial role, since it helps the person not only to feel more motivated to continue, but also to learn to generate a more consistent motor intention. By receiving feedback, the subject can identify whether they have executed the mental task correctly, which in turn helps them gain more control over their brain activity, motor function, and the BCI application, and reinforces performance in the rehabilitation process, as shown in other studies [12][13][14][15][16]. Indeed, the subjects reported feeling more comfortable, focused, and immersed while playing, compared to earlier tests in which they were only instructed to imagine raising one of their upper limbs.
Finally, the characteristics of the game were also decisive for subject immersion. This is in accordance with the fact that fundamental aspects of video game design, such as music, sound effects, visuals, and game mechanics, are factors that can also alter test results and subject motivation [91]. The analysis of each of these factors, and thus the impact of attempts at improvement on the results, can be addressed in subsequent work.

Conclusions
The BCI prototype implemented was based on the acquisition of EEG signals using an OpenBCI wet electrode cap, signal processing and subsequent classification using the NeuroPype program, and transmission of the generated classification data to a virtual environment (a video game), using code that translated the received data into control commands for moving a character across an endlessly generated platform. The tests were performed with healthy subjects who engaged in trials of 1-5 h, and the maximum survival time in the game recorded by any participant was 65 s. The processing included stages in which methods such as CSP, a FIR filter, and LDA were applied, with satisfactory results. Our tests showed that there is the possibility of testing, and likely implementing, the prototype for people who require motor rehabilitation treatment, and that a video game as part of subject feedback can be highly motivating, as mentioned in [92,93]. Among the possible extensions of this work are improving the classification speed of the prototype so that it can be used in applications that require higher speeds, as well as changes to or variations of the video game according to technical patterns within the area of game development. Finally, another possible extension of this work is to use the prototype with people who require rehabilitation treatment for their upper limbs and to utilize virtual environments, such as video games, in said treatment.

Funding This study was funded by Universidad Pedagógica y Tecnológica de Colombia (project number SGI 3303), and the APC was funded by the same institution.

Data availability
The authors confirm that the data supporting the findings are available from the corresponding author, upon reasonable request.

Conflicts of interest
The authors declare no conflict of interest.
Informed consent Informed consent was obtained from all subjects involved in the study.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.