
1 Introduction

There have been many attempts to identify the movement of the human body in a virtual way by monitoring the behaviour of the extremities, for haptic interfaces [7], teleoperation tasks [1], and assistive and rehabilitation devices [4, 16]. Robots increase the number of repetitions performed in a rehabilitation session, thus improving patient morale and motivation [24]. In recent years, rehabilitation devices have used sEMG as the main source of feedback [15] for control [8].

Several sEMG techniques are used for the identification and classification of movements [3]. Some of the most relevant are Detrended Fluctuation Analysis (DFA) for the identification of low-level muscle activation [19], the decomposition of the sEMG signal into Motor Unit Action Potential Trains (MUAPTs) [18], the Tunable-Q factor Wavelet Transform (TQWT) based algorithm proposed for the classification of physical actions [2], and Convolutional Neural Networks (CNNs), recently confirmed as a powerful tool for the classification of operator movements [25]. These methods, combined with appropriate signal filtering techniques [5], are useful for estimating the movement of the human body. Data fusion [13] using sEMG sensors is widely used in rehabilitation. Movement recognition algorithms generally combine sEMG signals with Inertial Measurement Unit (IMU) data [11] or with force sensors [10]. In particular cases, flexion sensors [22] are used to avoid accumulated measurement error.

This paper focuses on the development of a soft compressive jacket with a network of soft flexion sensors, fused with sEMG sensors attached to it, for movement detection of the upper limbs. This device allows the user to quickly start estimating shoulder orientation without the need for prior calibration.

2 Materials and Methods

Using a configuration of seven one-axis sensors, as in previous work [21], it is possible to capture 95% of the variance of the principal components for the shoulder gestures. The configuration proposed in this paper instead places an array of only four flexion sensors in the intermediate positions, since each of them provides flexion measurements in two axes.

The array of four flexion sensors Sx ('x' being the sensor number) was placed over a compression jacket (see Fig. 1). The capacitive flexion sensors are the Bendlabs Two-Axis Sensor [12], whose operation is explained in detail in [20]. The sensors have been attached to the compression jacket by sewing on two small rigid pieces that hold and guide each sensor along the direction of arm movement, making fabric effects such as wrinkling and stretching negligible. The first support (FxA) (see Fig. 1b) holds the sensor in a fixed position, while the second one (FxB) allows it to slide inside it and guides it over the arm (see Fig. 1c).

Fig. 1.

Soft sensor device. sEMG locations (1a): Trapezius Descendens (CH3), Deltoideus Medius (CH2) and Pectoralis major (CH1). Marker locations: one over the Acromion bone of the shoulder, two on the arm (1b) and two vertically over the base (1c).

The sensor arrangement allows shoulder movement to be measured in a six-degree-of-freedom (DoF) workspace that does not include arm rotation around its longitudinal axis; this measurement is converted into two angles, in the XY and YZ planes, derived from the ground-truth marker positions. The sEMG sensors [23] are placed following the recommendations of Surface Electromyography for the Non-Invasive Assessment of Muscles (SENIAM) [9]. The electrodes are placed on the user (as shown in Fig. 1); then, the user puts on the compression jacket over the electrodes (not shown in Fig. 1b or Fig. 1c). Thanks to the small rigid pieces, the design of this device allows the deformation and stretching of the fabric to be disregarded, while not limiting the user's mobility in daily tasks.
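
As an illustration, the MATLAB sketch below shows one way such a pair of plane angles could be obtained from a ground-truth arm direction; the marker names, axis convention and example coordinates are assumptions, not the exact conversion used in this work.

```matlab
% Illustrative sketch: derive two plane angles from ground-truth marker
% positions (marker names, axes and coordinates are assumed).
acromion = [0.10 0.25 1.40];     % Acromion marker, example coordinates (m)
armMk    = [0.35 0.30 1.20];     % distal arm marker, example coordinates (m)

v = armMk - acromion;            % arm direction in the motion-capture frame

angXY = atan2d(v(2), v(1));      % angle of the projection onto the XY plane
angYZ = atan2d(v(2), v(3));      % angle of the projection onto the YZ plane
```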

The gestures performed were simplified to cover the natural range of arm movement [14] for daily tasks, and each was assigned a number for further identification: 1. Abduction/Adduction of the shoulder until the arm reaches 120\(^{\circ }\) of inclination; 2. Flexion/Extension of the shoulder from 0\(^{\circ }\) to 120\(^{\circ }\); 3. Horizontal adduction/displacement of the arm at 90\(^{\circ }\) of flexion, where the hand crosses the sagittal plane until the arm reaches a 30\(^{\circ }\) displacement; 4. Closing/Swing drill movement of the arm inwards from 0\(^{\circ }\) to 120\(^{\circ }\); and 5. Opening/Swing drill movement, starting from a flexion of 120\(^{\circ }\) and returning to 0\(^{\circ }\). The method developed in this study was evaluated on four healthy subjects; tests were spread over three different days to avoid muscle fatigue. Each subject performed a total of five repetitions of each of the five gestures, continuously and without interruptions. Participants' ages ranged from 24 to 30 years. All gestures were performed while the subjects sat in a chair facing a screen.

2.1 Data Acquisition

To start data collection, the sEMG sensors and the flex sensor compression jacket are placed on the subject’s right arm. Then, OptiTrack [17] markers are located as shown in Fig. 1, in order to obtain the real pose of the subject’s arm.

Fig. 2.

The EMG signals and the angles of the user's movements (first box on the left) were acquired while the user followed the visual feedback generated by the interface on the Jetson Nano, which also stores these data. The OptiTrack system stores the positions of the markers. Finally, both files are merged into one.

Both the flexion and sEMG sensors are connected to a custom acquisition board based on the LAUNCHXL-F28379D development board. On the one hand, the sEMG sensors provide an amplified, rectified and integrated analogue signal (i.e., the EMG envelope), which is sampled by the micro-controller at a rate of 1 kHz. On the other hand, the flexion sensors communicate with the micro-controller via the I2C protocol at a frequency of 200 Hz.

A graphical user interface has been developed to guide the speed and type of movement of the participants while performing the gestures and to log all acquired data. This software has been implemented on an NVIDIA Jetson Nano, which communicates with the acquisition board via SPI at 500 Hz and stores the data contained in every received message, along with the timestamp and the gesture being performed, in a plain text file. Simultaneously with the start of the sensors' data acquisition, OptiTrack capture is started at 240 fps. At the end, a file with the positions of the markers of the OptiTrack system is exported. The interaction of all the elements is shown in Fig. 2.

2.2 Data Processing

The sessions for each subject are condensed into a single file. Given that the OptiTrack captures are made at 240 fps, an interpolation is performed to bring these data to a frequency of 500 Hz. The interpolation takes the OptiTrack file, which is the shorter one, and matches its number of samples to that of the Jetson Nano file by filling in the missing data with quadratic splines. The signal from the sEMG sensors is filtered offline. To smooth these data, a Savitzky-Golay filter (a smoothing local regression using weighted linear least squares and a 2nd-degree polynomial) is applied with a span of 0.7% of the total number of data points. The reference angle is obtained from the OptiTrack markers by calculating the angle between the line defined by the arm markers and the marker on the Acromion bone of the shoulder, and the vertical defined by the markers on the backrest of the rehabilitation system.
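
A minimal MATLAB sketch of this offline processing chain is given below; the variable names are assumptions, the cubic 'spline' option of interp1 stands in for the quadratic splines mentioned above, and sgolayfilt requires the Signal Processing Toolbox.

```matlab
% Offline-processing sketch (assumed variable names).
% 1) Resample the OptiTrack angles (240 fps) onto the 500 Hz Jetson time base;
%    'spline' (cubic) is used here as a stand-in for the quadratic splines.
angOpti500 = interp1(tOpti, angOpti, tJetson, 'spline');

% 2) Smooth each sEMG envelope with a Savitzky-Golay filter (2nd-degree
%    polynomial, window of about 0.7% of the samples, forced to be odd).
win  = max(5, 2*floor(0.007*numel(emg)/2) + 1);
emgS = sgolayfilt(emg, 2, win);

% 3) Reference angle: angle between the line through the arm/Acromion markers
%    and the vertical defined by the backrest markers.
armVec  = armMk   - acromionMk;        % 1x3 marker vectors from the merged file
vertVec = backTop - backBottom;
refAng  = atan2d(norm(cross(armVec, vertVec)), dot(armVec, vertVec));
```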

A time series network [6] trained with the Levenberg-Marquardt algorithm is used to calculate the orientation of the arm, using the angles given by the flexion sensors as input data and the OptiTrack marker reference as target. With MATLAB's Machine Learning and Deep Learning Toolboxes, the best parameters for this task were estimated to be 10 hidden neurons with consecutive samples as input delays. For the training of the neural network, 70% of the data was used for training, 15% for validation and 15% for testing, in order to find the lowest MSE and the best regression value (R).
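
The sketch below follows the structure of the script generated by MATLAB's time series fitting tool; the variable names and the choice of 1:2 input delays ('consecutive samples') are assumptions made for illustration.

```matlab
% Orientation estimator sketch (assumed variable names and delays).
X = tonndata(flexAngles, false, false);   % flexion-sensor angles (input)
T = tonndata(refAngles,  false, false);   % OptiTrack reference angles (target)

net = timedelaynet(1:2, 10, 'trainlm');   % 10 hidden neurons, Levenberg-Marquardt
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio   = 15/100;
net.divideParam.testRatio  = 15/100;

[x, xi, ai, t] = preparets(net, X, T);
[net, tr]      = train(net, x, t, xi, ai);

y   = net(x, xi, ai);
mse = perform(net, t, y);                          % mean squared error
R   = regression(cell2mat(t), cell2mat(y), 'one'); % overall regression value
```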

For the classification of the gestures, dummy variables of the numbers assigned to each movement (as listed in Sect. 2) are used to recognise each pattern from the fusion of the filtered sEMG signals and the data from the flexion sensors; a two-layer feed-forward network, with sigmoid hidden neurons and softmax output neurons, was used for pattern recognition. Ten, fifteen, twenty and twenty-five hidden neurons were evaluated in terms of computation time and number of iterations needed to reach the best cross-entropy value; fifteen hidden neurons were the most appropriate for this task. The networks were trained with scaled conjugate gradient back-propagation (trainscg), again using 70% of the data for training, 15% for validation and 15% for testing.
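
A minimal MATLAB sketch of this classifier is shown below; the variable names and the row-wise stacking of the fused features are assumptions.

```matlab
% Gesture classifier sketch (assumed variable names and feature layout).
X = [emgFiltered; flexAngles];          % fused features, one column per sample
T = full(ind2vec(gestureLabels));       % gestures 1-5 as one-hot (dummy) targets

net = patternnet(15, 'trainscg');       % 15 sigmoid hidden neurons, softmax output,
                                        % scaled conjugate gradient back-propagation
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio   = 15/100;
net.divideParam.testRatio  = 15/100;

[net, tr] = train(net, X, T);

Y    = net(X);
pred = vec2ind(Y);                      % predicted gesture for each sample
xent = perform(net, T, Y);              % cross-entropy performance
```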

3 Results and Discussion

For the estimation of the orientation, two time series network configurations were designed, one using dummy variables and one using the gesture numbers of Sect. 2, resulting in an MSE of \(1.49E-05\) with a regression value of \(9.99E-01\), and an MSE of \(1.50E-04\) with an R of \(9.99E-01\), respectively. It can be concluded that the selection of either of the two target variables does not have a significant influence on the results, given that both achieve a regression value (R) of 0.999 and the difference in MSE is minimal.

In order to evaluate the trained networks for both orientation and gesture, a new single session is performed by one of the original subjects. It is observed that the proposed device is valid for finding the orientation of the arm when the network is calibrated with a sub-millimetre system. This can be noted in the box in Fig. 3, corresponding to one portion of the whole closing drill movement; the data come from the flexion sensors only, are processed offline with the trained network and are then compared with the ground-truth data of that new session. The estimation is close to the calibration system, with an MSE of \(1.32E-05\).

Fig. 3.

Angle generated by the arm: ground-truth signal together with the signal estimated by the network.

For gesture classification, the condensed data from the flexion sensors are taken together with the sEMG data in order to train a pattern recognition neural network. To verify that the fusion of the data is feasible, three different networks are trained: one with only the sEMG data, another with only the flexion sensors, and a third with both. The resulting performance values are displayed in Table 1.

Table 1. Classification performance (percentage) for sEMG only (50.6%), soft flexion sensors only (89.8%) and combined sEMG and flexion sensors (95.4%).

In order to compare the models, the F-score is used, given by \(F_{score} = \frac{2 \cdot Recall \cdot Precision}{Recall + Precision}\), where \(Recall = \frac{TP}{TP + \sum FN}\) (TP being the true positives and FN the false negatives) and \(Precision = \frac{TP}{TP + \sum FP}\) (FP being the false positives), computed per class from each of the confusion matrices.
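
The following MATLAB sketch shows how these per-class metrics can be obtained from a confusion matrix; the variable names are assumptions.

```matlab
% Per-class precision, recall and F-score from a confusion matrix
% (rows = true gestures, columns = predicted gestures; names assumed).
C = confusionmat(trueGesture, predGesture);

TP = diag(C);
FP = sum(C, 1)' - TP;        % predicted as the class but actually another one
FN = sum(C, 2)  - TP;        % samples of the class missed by the network

precision = TP ./ (TP + FP);
recall    = TP ./ (TP + FN);
fscore    = 2 .* precision .* recall ./ (precision + recall);
```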

The network using only the sEMG sensors shows poor results for this application, whilst the flexion network and the combination of sEMG and flexion sensors both show promising results. Flexion sensors alone may be sufficient for classifying movements in applications that do not require great sensitivity, while the fusion of both sensor types yields high performance with minimal error. The overall performance of the network fusing the two types of sensors is 95.4%; the best results were obtained for the Abduction gesture, with 98% precision, which could be explained by it being the only gesture performed in a different movement space and with different muscle activation with respect to the other four gestures. On the other hand, Horizontal Adduction (93.1%) shares estimations with the Closing Drill and Opening Drill gestures; this may be because they share part of the movement space.

Table 2. Trained network estimation: Ranking results for each gesture performed

Using the data acquired in the new session and tested offline, it can be noted that gestures that coincide in the flexion movement space, such as the drill gestures, cause errors in the pattern estimation. Table 2 depicts the estimation response, in percentage, for each gesture made during the new data collection. Horizontal Adduction presents the least accurate estimation, in contrast to Abduction, which presents the smallest error, broadly in line with the training performance of the two-sensor network.

3.1 Discussion

Since the objective of this study is to control flexible exoskeletons used in rehabilitation and assistive devices for the upper limbs, feedback on the shoulder position and the gesture performed is extremely important for control. This device, in its prototype stage, was built in a single compression-shirt size, although it can be adapted to different sizes. This paper does not present a study comparing undefined gestures; it is estimated that the developed algorithm could work for new gestures as long as the data are processed and do not fall within the range of movement of the other gestures.

4 Conclusions

In this paper, the signals of three electromyography sensors and an array of four flexion sensors are used to compose a flexible device that estimates five predefined gestures and the orientation of the shoulder. Two different algorithms are used, one for pattern recognition to estimate the gesture being performed, and a recurrent (time series) neural network to estimate the orientation of the arm. The results show that the device, with its array of four flexion sensors, is capable of estimating the gestures with a performance of 89.8%, which improves to 95.4% when the sEMG signals are added to the algorithm; there is still room for improvement in this last configuration, such as filtering the sEMG signal online. Depending on the performance required for the application, different arrangements can be used.