Introduction

Robot-assisted training has received increasing attention in the field of rehabilitation because of its ability to improve or restore muscle function in patients suffering from neurological injuries. Compared to human therapists, robots can provide faster, easier-to-perform, and adjustable repetitive training, which is essential for patients to regain muscle function [5]. Many robotic devices, such as robotic hand prostheses [1, 2], hand exoskeletons [3, 32], and robotic gloves [7, 25], have been developed and used for hand function rehabilitation/reconstruction in patients with hand disability or weak grasping strength. Soft robotic gloves are more user-friendly than rigid hand exoskeletons owing to advantages such as simple mechanical design, light weight, and close and safe physical human-robot interaction. Various mechanisms and actuations of robotic gloves have been developed [6, 12, 23, 30, 38, 40]. In addition to the aforementioned mechanisms, numerous research efforts have been dedicated to ensuring control system stability in complex nonlinear systems within the field of robotics, including self-triggered model predictive control [18], point-wise control based on point measurement [28], point-to-point iterative learning control [42], and so on.

Assistive control of robotic gloves based on surface electromyography (sEMG) has been investigated in several works. An assistive control method using EMG was developed to support activities of daily living (ADL) [26]. In particular, the EMG flexor/extensor signals, together with some rules, are used to control the hand closing, hand opening, and hand 'hold' functions of the glove. A similar EMG control approach can be found in [12, 24]. In [8], proportional control of the grasping force was proposed instead of on-off control. The grasping force is thus tuned smoothly during the grasping process, which is preferable for grasping objects with different stiffness. A method combining EMG sensors and force sensors to decode human intention and different grasp types was reported in [13]. In [27], EMG is used to detect hand closing/opening through pattern classification for assistive control of a hand orthosis. In addition, two multimodal control strategies are adopted in which bend sensors and pressure sensors are also employed for control, in case EMG signals cannot be continuously maintained for hand closing. Moreover, the human-object contact was estimated from EMG signals in [11, 36], and the estimated grasping pressure force was used for bilateral rehabilitation control of a hand exoskeleton in [11].

Researchers have employed various methods to estimate human grasping force from EMG signals, including artificial neural networks (ANN) [10, 15], radio-frequency identification (RFID) [39], multi-modal information [27], and contact-based approaches [23]. In recent years, deep learning-based techniques have gained significant popularity in EMG recognition, and numerous studies have applied deep neural networks to EMG processing. Phinyomark et al. [29] address the issue of EMG processing in the context of the rapid growth of big data and deep learning. Faust et al. [14] provide an overview of deep learning in healthcare applications involving biomedical signals, such as EMG, EEG (electroencephalogram), ECG (electrocardiogram), and EOG (electrooculogram). Mahmud et al. [19] summarize the use of deep learning, reinforcement learning, and deep reinforcement learning in the biological field with biomedical signals, including EEG, ECG, and EMG.

While EMG-based methods may function effectively for robots with a limited number of degrees of freedom, interpreting these signals poses several challenges, making them most efficient when utilized for preprogrammed motion triggering [5]. Considering the non-invasiveness, ease of use, and portability of EMG devices, this study estimates human-object contact through the detection of EMG signals.

On the other hand, from the viewpoint of assistive control strategies, the whole rehabilitation process with assistive technology can generally be divided into three stages: passive rehabilitation, active rehabilitation, and resistive rehabilitation [33]. Passive rehabilitation is mostly adopted in the early stage of rehabilitation, where the human motion is mobilized by rehabilitation devices. As a consequence, the patients are not actively involved in the rehabilitation training. To help the patient regain muscle strength and accelerate the rehabilitation process, the active rehabilitation strategy requires the patient to produce some effort in the training.

Assist-as-needed (AAN) control is an assistive strategy that covers both passive and active rehabilitation. AAN control approaches aim to minimize the intervention of robotic devices in rehabilitation therapy when users are able to perform the task. The human effort for a training task is measured or estimated by different sensors. If it is detected that the human effort is not enough to complete a training task, the assistive robot provides more assistance; if it is detected that the human effort is increasing, the assistive robot reduces its assistance. If the human effort falls below a minimum threshold, the assistive robot takes charge and helps the patient finish the task. The AAN control strategy has been adopted in different rehabilitation robots such as upper-limb rehabilitation robots [20, 35], lower-limb rehabilitation robots [4, 34], and hand rehabilitation robots [31]. However, AAN control based on human-object contact estimation has not yet been investigated for hand rehabilitation.

In this paper, a new control method is developed to control a soft robotic glove so that it provides assistance as needed during grasping. The combination of human-object contact estimation and the AAN control strategy for hand rehabilitation is investigated in this work. Experiment results on the soft robotic glove are provided to verify the effectiveness of the AAN strategy. In contrast to previous research, this study's contributions can be summarized as follows:

  1.

    The controller designed in this study for human-object contact estimation is unique in that it eliminates the need for a fully drivable system or known robot dynamics, distinguishing it from impedance and closed-loop controllers [1, 2, 3], among others. This design feature greatly simplifies the mechanical and controller design process.

  2.

    The proposed schemes prioritize user autonomy in task execution, intervening less when the user is capable. For users with insufficient strength or limited mobility, these schemes minimize the potential risk of harm when providing power assistance, thereby ensuring the safety of this approach. The proposed schemes either do not require a dynamic model of the system or are less affected by incomplete information regarding the dynamics.

The rest of this paper is organized as follows. The soft robotic glove system is introduced in "System architecture" section. The main algorithm, including human-object contact estimation and AAN control, is provided in "Main assistive control algorithm" section. Experiment results are illustrated in "Experiments" section. Finally, some concluding remarks and future research directions are given in "Conclusions" section.

System architecture

Robotic glove system

An overview of the robotic glove system is shown in Fig. 1 and the algorithm structure is shown in Fig. 2. The whole system comprises the following devices:

  1.

    A soft robotic glove (BIOSERVO) [9]. It has three fingers, each with a force-sensitive resistor (FSR) attached to the pad of the fingertip, and a control box. Each finger can be controlled separately via cables connected to the control box and is driven by cables passing through its two lateral sides. The FSR sensors can be used to detect the contact force with an object during grasping tasks. The measuring range of an FSR sensor is from 0 to 20 N. The robotic glove control box is connected to a computer via serial communication, and the FSR data are sent from the glove control box to the laptop at a rate of 2 Hz.

Originally, the robotic glove was controlled proportionally with the contact force acquired from the FSR sensors. The FSR-based controller is direct and simple to implement. However, it relies on good contact between the FSR sensor and the object, so it cannot be used for objects of different dimensions where such contact is not guaranteed.

Fig. 1 Robotic glove system

Fig. 2 System control architecture

  2.

    An EMG armband (MYO armband) [21]. The armband provides 8 channels of surface EMG (sEMG) signals and allows the sEMG electrodes to be easily attached to the forearm. Another advantage of the EMG armband is that it transmits data to the computer via wireless communication (Bluetooth), which is preferable for a portable system. The sampling frequency for EMG data collection is 200 Hz. The placement of the EMG armband on the human forearm follows the rule provided in [22].

  3.

    A laptop computer. A machine interface is designed in MATLAB on the laptop. The high-level grasping force estimation and AAN control are executed on the laptop, which generates the desired assistive force for each robotic glove finger and sends it to the glove control box.

EMG signal processing

The raw EMG signals are too noisy to be used directly. Various EMG features have been discussed and selected for myoelectric control, such as mean absolute value (MAV), root mean square (RMS), and frequency-domain features. As shown in [17], RMS yields better hand-gesture classification results than MAV. Therefore, the RMS signals of the EMG are generated, with the window size selected as 100 samples. The processed EMG signals are used as the input of a neural network to estimate the human-object contact. An EMG data processing result is shown in Fig. 3. It can be seen that the processed EMG signal is much smoother than the original one.
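As an illustration of this feature-extraction step, the following Python sketch computes windowed RMS values for the eight EMG channels. Non-overlapping 100-sample windows are assumed here (100 samples at 200 Hz correspond to 0.5 s, i.e., one feature vector every 0.5 s); the paper does not specify whether the windows overlap.

```python
import numpy as np

def emg_rms_features(emg, window=100):
    """Compute windowed RMS features from raw multi-channel EMG.

    emg    : array of shape (n_samples, 8), raw sEMG from the 8-channel armband
    window : RMS window length in samples (100 samples = 0.5 s at 200 Hz)
    Returns an array of shape (n_windows, 8), one RMS value per channel per window.
    """
    n_windows = emg.shape[0] // window
    trimmed = emg[: n_windows * window].reshape(n_windows, window, emg.shape[1])
    return np.sqrt(np.mean(trimmed ** 2, axis=1))

# Example: 10 s of simulated 8-channel EMG sampled at 200 Hz
raw = np.random.randn(2000, 8)
features = emg_rms_features(raw)   # shape (20, 8), i.e. a 2 Hz feature rate
```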

Main assistive control algorithm

The control architecture of the soft glove is shown in Fig. 2. The human-object contact is estimated by neural networks. Assistive force of the robotic glove is computed using the AAN control strategy. In the last step, the computed assistive force is sent to the glove control box.

Human object contact estimation

Referring to Fig. 2, the processed EMG signals are used as the input of the neural network. FSR sensors on the pads of the glove fingertips are used to measure the grasping force directly. The collected grasping force data are used as the reference output for neural network training.

For human-object contact estimation, an object is placed on the desk at a fixed location, and a human subject is asked to wear the robotic glove and press (not lift) the object from two opposite directions with different levels of muscle strength; see Fig. 4a for the whole human-object contact estimation system and Fig. 4b for the object used for the estimation. Only the thumb, the index finger, and the middle finger are allowed to touch the object. The total human pressure force \({\vec {F}}_{FSR}\) can be computed by:

$$\begin{aligned} {{\vec {F}}_{FSR}} = {{\vec {F}}_{FSR1}} + {{\vec {F}}_{FSR2}} + {{\vec {F}}_{FSR3}} \end{aligned}$$
(1)

where \({\vec {F}}_{FSR1}\), \({\vec {F}}_{FSR2}\), and \({\vec {F}}_{FSR3}\) are the contact forces of the thumb, the index finger, and the middle finger, respectively, measured by the FSRs. As the human fingers touch the steel cylinder from two opposite directions, the magnitude of \({\vec {F}}_{FSR}\) can be approximately regarded as:

$$\begin{aligned} {F_{FSR}} \approx {F_{FSR1}} + {F_{FSR2}} + {F_{FSR3}} \end{aligned}$$
(2)

where \(F_{FSR1}\), \(F_{FSR2}\), and \(F_{FSR3}\) are the magnitudes of \({\vec {F}}_{FSR1}\), \({\vec {F}}_{FSR2}\), and \({\vec {F}}_{FSR3}\), respectively.

Fig. 3 EMG signal processing result

Fig. 4 Different grasping objects. (a) The whole system. The steel cylinder (b) is used for human-object contact estimation; the wood block (c) and the screwdriver (d) are used to verify the AAN control strategy

Fig. 5 Neural network for human-object contact estimation

Fig. 6 Online human-object contact estimation results of a healthy subject. In Session 1, the contact object is a wood block (see Fig. 4c), and in Session 2, the contact object is a screwdriver (see Fig. 4d)

In particular, it is known that machine learning is highly dependent on the accuracy of the training data. To obtain precise human-object contact data for neural network training, we use a steel cylinder (see Fig. 4a, b) as the object, due to its stable and rigid structure and good contact with the human fingertips. This limitation on the object is required only for neural network training, as the goal is to obtain accurate force data; such a limitation does not exist in the AAN control.

As the sampling rates of the EMG data and the FSR data are different, to synchronize the data acquisition, the sampling rate for the whole data collection system is selected as 2 Hz.
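To illustrate how the two data streams can be paired at the common 2 Hz rate, the sketch below aligns each 2 Hz EMG feature vector with the corresponding FSR reading and forms the summed fingertip force of Eq. (2) as the training target. Aligning the streams by simple truncation to the shorter one is an assumption made here for illustration.

```python
import numpy as np

def build_training_pairs(emg_features, fsr_forces):
    """Pair 2 Hz EMG feature vectors with 2 Hz FSR force readings.

    emg_features : (n, 8) RMS features, one row per 0.5 s window (2 Hz)
    fsr_forces   : (m, 3) per-finger FSR forces sampled at 2 Hz
    Returns inputs (k, 8) and targets (k,), where the target is the summed
    fingertip force F_FSR = F_FSR1 + F_FSR2 + F_FSR3 (Eq. (2)).
    """
    k = min(len(emg_features), len(fsr_forces))   # align stream lengths
    inputs = emg_features[:k]
    targets = fsr_forces[:k].sum(axis=1)          # total pressing force
    return inputs, targets
```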

The human object contact estimation by the neural network is shown in Fig. 5. The ANN has eight input nodes corresponding to the eight-channel EMG signals, and one output node representing the estimated force. The contribution and weight of each EMG channel to force estimation are implicitly represented in the trained network weights of the ANN, eliminating the need for a priori weight assignment. There are two hidden layers and each hidden layer contains twenty nodes. The back-propagation technique is used. In addition, the ReLU function is adopted as the activation function to deal with the vanishing gradient problem for the multi-layer neural network. Moreover, the output nodes employ the softmax activation function.

The ANN in this paper was designed based on the Radial Basis Function Neural Network (RBFNN) to realise the estimation process, because the RBFNN can approximate any nonlinear function with reasonable precision [37], and it has been proved to be superior in approximating continuous functions [16]. There is one hidden layer containing N nodes, and in each node the Gaussian kernel function is used as the activation function. Considering the ith node in the hidden layer, with \(\mathrm{E}\) denoting the vector of processed EMG inputs, \(P_i\) and \(\delta _i\) are the center and width parameters of this node, and the output of this node \(a_i\) is calculated by:

$$\begin{aligned} a_i = e^{-\frac{\left\| \mathrm{E}-\mathrm{P}_i\right\| ^2}{\delta _i^2}}. \end{aligned}$$
(3)

A linear function was selected to map the hidden layer outputs to the final output \(F_{EMG}\) as follows:

$$\begin{aligned} F_{EMG} = \sum _{i=1}^{N}{w_i a_i}+b, \end{aligned}$$
(4)

where the adaptive parameters \(w_i\) and \(b\) represent the weights and the bias, which need to be determined by training on data. The back-propagation technique is used to train these parameters. In the experiment, \(N=60\).
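The following Python sketch shows one possible realisation of Eqs. (3) and (4) with \(N=60\) hidden nodes. Fixing the centers \(P_i\) to randomly chosen training samples, using a common width for all \(\delta_i\), and training only \(w_i\) and \(b\) with batch gradient descent are illustrative assumptions; the paper does not detail how these parameters are initialised or trained beyond stating that back-propagation is used.

```python
import numpy as np

class RBFNN:
    """Radial basis function network for EMG-based force estimation (Eqs. (3)-(4)).

    Only the output weights w and bias b are trained here, with simple batch
    gradient descent on the squared error; centers and widths are kept fixed.
    This is a sketch under stated assumptions, not the paper's exact procedure.
    """

    def __init__(self, centers, delta):
        self.P = centers                      # (N, 8) kernel centers
        self.delta = delta                    # (N,) kernel widths
        self.w = np.zeros(len(centers))       # output weights
        self.b = 0.0                          # output bias

    def _hidden(self, E):
        # a_i = exp(-||E - P_i||^2 / delta_i^2), Eq. (3); E has shape (n, 8)
        d2 = ((E[:, None, :] - self.P[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / self.delta ** 2)

    def predict(self, E):
        # F_EMG = sum_i w_i a_i + b, Eq. (4)
        return self._hidden(E) @ self.w + self.b

    def fit(self, E, F, lr=1e-2, epochs=2000):
        A = self._hidden(E)
        for _ in range(epochs):
            err = A @ self.w + self.b - F      # prediction error
            self.w -= lr * A.T @ err / len(F)  # gradient step on the weights
            self.b -= lr * err.mean()          # gradient step on the bias
        return self

# Illustrative usage with N = 60 hidden nodes and stand-in data
rng = np.random.default_rng(0)
X, y = rng.random((900, 8)), rng.random(900)           # stand-ins for RMS features / F_FSR
centers = X[rng.choice(len(X), 60, replace=False)]
model = RBFNN(centers, delta=np.full(60, 0.5)).fit(X, y)
F_EMG = model.predict(X[:5])                            # estimated contact force
```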

Assist-as-needed control

As shown in Fig. 2, apart from the estimated human-object contact, the reference grasping force for grasping a specific object should also be known. The reference grasping force is defined as the grasping force required to successfully grasp a specific object. The reference grasping force can be acquired by a healthy subject before it is applied for a patient, since the patient may not be able to generate enough muscle strength to grasp the object, which would result in a failed acquisition of the reference grasping force.

Fig. 7 Objects for the comparison experiment. 1 Soft bottle, 2 thin book, 3 bottle and 4 thick box

It is noted that the reference grasping force can either be measured by the FSR sensors directly, or be estimated indirectly via the EMG sensors in case the contact between the FSR sensors and the object is not perfect.

The assistive force can be readily computed by the following equation:

$$\begin{aligned} {F_{Assistive}} = {F_{Ref}} - {F_{EMG}} \end{aligned}$$
(5)

where \(F_{Assistive}\), \(F_{Ref}\), and \(F_{EMG}\) are the total assistive grasping force, the reference grasping force, and the human-object contact force estimated by EMG, respectively.

Table 1 Object descriptions

Fig. 8 Estimation results in comparison experiment

From the total assistive force \(F_{Assistive}\), the assistive force for each robotic glove finger is further computed. Based on the findings in [41], the force distribution among the fingers can be calculated by Eqs. (6) and (7).

If palmar grasp is implemented, then

$$\begin{aligned} \begin{array}{l} {F_{Thumb}} = {F_{Assistive}}/2 \\ {F_{Index}} = {F_{Middle}} = {F_{Assistive}}/4 \end{array} \end{aligned}$$
(6)

If pincer grasp is implemented, then

$$\begin{aligned} \begin{array}{l} {F_{Thumb}} = {F_{Index}} = {F_{Assistive}}/2 \\ {F_{Middle}} = 0 \end{array} \end{aligned}$$
(7)

where \(F_{Thumb}\), \(F_{Index}\), and \(F_{Middle}\) are the computed assistive forces for the thumb, the index finger, and the middle finger, respectively. Finally, these values are sent to the glove control box to actuate the assistive force control of the robotic glove. In this work, we only consider the palmar grasp, as it is usually used for grasping heavy objects, which require more grasping force than lightweight objects. A minimal sketch of this control step is given after the following table and figure captions.
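The sketch below combines Eq. (5) with the finger distributions of Eqs. (6) and (7). Clamping the assistive force at zero when the estimated human force exceeds the reference is an assumption added here for safety; the paper only gives \(F_{Assistive} = F_{Ref} - F_{EMG}\).

```python
def aan_finger_forces(F_ref, F_emg, grasp="palmar"):
    """Compute per-finger assistive forces from Eqs. (5)-(7).

    F_ref : reference grasping force for the object (N)
    F_emg : human-object contact force estimated from EMG (N)
    Returns (F_thumb, F_index, F_middle).
    """
    F_assistive = max(F_ref - F_emg, 0.0)     # Eq. (5), clamped to be non-negative
    if grasp == "palmar":                     # Eq. (6)
        return F_assistive / 2, F_assistive / 4, F_assistive / 4
    if grasp == "pincer":                     # Eq. (7)
        return F_assistive / 2, F_assistive / 2, 0.0
    raise ValueError("grasp must be 'palmar' or 'pincer'")

# Example: a 14.18 N reference force with 9.0 N of estimated human effort
print(aan_finger_forces(14.18, 9.0))          # -> roughly (2.59, 1.295, 1.295)
```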

Table 2 MSE results in grasping 4 different kinds of objects with the proposed RBF and the compared BP method

Fig. 9 AAN control performance of a healthy subject

Experiments

Human-object contact estimation performance

Human-object contact estimation was implemented with a neural network and tested using the steel cylinder as the grasping object (see Fig. 4b). The data collection lasts 500 s for each training of the neural network, during which 1000 input/output samples are collected from a healthy subject. Of these, 900 samples are used for neural network training. The remaining 100 samples, which come from the same data collection session, are used only for an initial regression evaluation, to check whether the neural network is well trained. Finally, an inter-session verification method is adopted for further verification. With the previously trained neural network, the performance of human-object contact estimation can be verified online.
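A short sketch of this data split and the initial regression check is given below. Random shuffling before the split and the tolerance used in the check are assumptions made for illustration; the paper does not state how the 100 evaluation samples are chosen or what criterion defines "well trained".

```python
import numpy as np

def split_session(X, y, n_train=900, seed=0):
    """Split one 500 s collection session (1000 samples at 2 Hz) into 900 training
    samples and 100 held-out samples for the initial regression check."""
    idx = np.random.default_rng(seed).permutation(len(X))
    return (X[idx[:n_train]], y[idx[:n_train]],
            X[idx[n_train:]], y[idx[n_train:]])

def regression_check(model, X_eval, y_eval, tol=1.0):
    """MSE on the held-out samples; the tolerance (in N^2) is a placeholder."""
    mse = float(np.mean((model.predict(X_eval) - y_eval) ** 2))
    return mse, mse < tol
```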

The online grasping force estimation results are shown in Fig. 6. Results from two experiment sessions are provided. It can be seen that the force estimated from EMG is generally close to the pressure force measured by the FSR sensors. Although there are some errors, they do not affect the overall performance of the EMG-based estimation. As a result, the human-object contact estimation can be used for AAN control.

In addition, it should be noted that, due to the low sampling rate of 2 Hz, it takes a long time to collect enough data for training the neural network, during which the human subject is required to press the object with different strengths. More training data would undoubtedly lead to better grasping force estimation performance.

Furthermore, a two-hidden-layer Back Propagation Neural Network (BPNN) was set up to estimate the human contact from the eight-channel EMG signals as a comparison. The first hidden layer has 50 nodes, the second has 10 nodes, and the output layer generates a one-dimensional output as the estimated human contact. Four objects were adopted in this comparison experiment, as displayed in Fig. 7, and Table 1 lists the key properties of these objects.

Figure 8 presents the estimation results from EMG by the BPNN and the RBFNN. It can be seen from Fig. 8 that, most of the time, the RBFNN estimate is closer to the pressure force measured by the FSR sensors than the BPNN estimate. We calculate the estimation error, denoted as the mean square error (MSE), of these two neural networks, and the results are presented in Table 2. It can be seen from Table 2 that the RBFNN is more precise than the BPNN. Therefore, this comparison experiment validates the effectiveness of the ANN adopted in this work. At the same time, it reflects the superiority of our method in estimation accuracy compared to the existing BPNN, as well as its applicability to multiple objects.
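For a comparison of this kind, the sketch below fits a two-hidden-layer (50/10 node) back-propagation network with scikit-learn and compares its MSE with that of a set of RBFNN predictions. The original system was implemented in MATLAB, so the use of scikit-learn and the random stand-in data here are purely illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

# Stand-ins for the processed EMG features, FSR targets, and RBFNN outputs.
rng = np.random.default_rng(0)
X, y = rng.random((1000, 8)), rng.random(1000)
rbf_predictions = y + 0.05 * rng.standard_normal(1000)

# BPNN baseline: two hidden layers of 50 and 10 nodes, trained by back-propagation.
bpnn = MLPRegressor(hidden_layer_sizes=(50, 10), max_iter=2000, random_state=0).fit(X, y)

mse_bpnn = mean_squared_error(y, bpnn.predict(X))
mse_rbfnn = mean_squared_error(y, rbf_predictions)
print(f"MSE  BPNN: {mse_bpnn:.4f}   RBFNN: {mse_rbfnn:.4f}")
```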

Assist-as-needed control performance

Based on the human-object contact estimation, the AAN control was executed by applying the AAN control algorithm in Eq. (5). Two objects are used in the experiments: a wood block (see Fig. 4c) and a screwdriver (see Fig. 4d). By applying the method described in "Main assistive control algorithm" section, the reference forces for grasping the wood block and the screwdriver are measured as 14.18 N and 10.6 N, respectively.

The experiment results of AAN control are shown in Fig. 9. From Fig. 9a, it is seen that the grasping force \(F_{FSR}\) measured by the FSRs is close to the grasping force \(F_{EMG}\) estimated from the EMG signals, owing to the good contact between the wood block and the human fingertips. However, in Fig. 9b, it can be seen that the force \(F_{FSR}\) measured by the FSRs is very small, indicating that human-object contact acquisition using the FSRs is unsuccessful. The reason is that it is difficult to obtain good contact between the screwdriver and the human fingertips, as the screwdriver is too small (see Fig. 4d). However, we can still use the EMG signals to estimate human strength. As a result, human strength estimation by EMG is more reliable in this situation.

From Fig. 9, it can also be seen that when the estimated human strength is not enough to hold an object, the robotic glove provides more assistive force. On the other hand, when the human subject is willing to use more strength, this intention is detected by the EMG sensor and the robotic glove decreases the assistive force accordingly.

Conclusions

In this work, we propose an AAN control strategy for a soft robotic glove. The human-object contact is estimated by an EMG sensor instead of an FSR sensor through machine learning with a neural network, which can estimate the contact indirectly and accurately without a dynamic model of the system. Based on the human-object contact estimation, the AAN control strategy is applied so that the soft robotic glove provides assistance whenever it is detected that the human strength is insufficient, e.g., due to muscle fatigue. Therefore, the proposed system supports patients in participating actively in the rehabilitation exercise.

It should be noted that glove closing/opening control is not considered in this work. As a result, the robotic glove remains closed throughout the whole rehabilitation exercise, so it cannot execute the typical "grasp and place" training task without an additional closing/opening control command. The integration of the human intention for hand closing/opening and the recognition of palmar/pincer grasp will be investigated in the future to complete this AAN control strategy.