Abstract
Position-force control is challenging for redundant manipulators, especially when both joint physical limitations and model uncertainties must be considered. In this chapter, we consider adaptive motion-force control of redundant manipulators with uncertainties in both the interaction model and the physical parameters. The whole control problem is formulated as a QP problem with a set of equality and inequality constraints: based on the admittance control strategy, the desired motion-force task is combined with the kinematic property of redundant manipulators, corresponding to an equality constraint in the resulting QP problem. Moreover, the uncertainties of both the system model and the physical parameters are considered, together with the complicated joint physical structure constraints, and formulated as a set of inequality constraints. An adaptive recurrent neural network is then designed to solve the QP problem online. This control scheme generalizes recurrent neural network based kinematic control of manipulators to position-force control, which opens a new avenue for shifting position-force control of manipulators from a pure control perspective to a cross design with both convergence and optimality considerations. Numerical results on a 7-DOF manipulator, the LBR iiwa, and comparisons with existing methods show the validity of the proposed control method.
3.1 Introduction
A manipulator is called redundant if its DOFs exceed those required to complete a task. The redundant DOFs enable the robot to maintain the position and orientation of the end-effector to complete a given task while adjusting its joint configuration to complete a secondary task. Taking advantage of this feature, typical manipulator systems such as collaborative robots, space robotic arms, and dexterous hands [1, 2] are all designed as redundant ones.
In Chaps. 1 and 2, we mainly focused on kinematic problems, in which we assumed that the end-effector of a manipulator could move freely in Cartesian space. In industrial applications, however, the interaction between the robot and the external environment must be considered: in tasks such as grinding and human-robot interaction, not only must high-precision motion control along a given trajectory be guaranteed, but the contact force between the robot and the environment must also be regulated.
There are several approaches to achieving force control for robot manipulators. By introducing series elastic actuators as flexible units, force control can be realized by adjusting the compliance of the joints. In [3], in order to overcome the discontinuous-friction and complexity problems of traditional back-stepping based methods, a modified command filter is introduced and an adaptive back-stepping controller is designed; experimental results show the effectiveness of the method. Other control schemes realize force control in Cartesian space. The most widely used method is impedance control [4], where the robot and the environment are regarded as impedance and admittance, respectively. The interaction model (also called the impedance model) can be a spring-mass-damper system, a spring-damper system, etc. Besides, a series of hybrid position-force controllers are designed in [5, 6], which consist of two independent loops, namely a position loop and a force loop. By designing the control schemes separately, the final control effort can be formulated as the sum of the outputs of the independent loops. Similar research can be found in [7,8,9].
In industrial applications, the accurate value of the impedance model can hardly be obtained. For example, the stiffness parameter may be sensitive to environmental factors such as temperature and humidity. Besides, uncertainties in the physical parameters would also affect control performance. In order to deal with the uncertainties in the interaction model, in [10], an adaptive impedance controller is designed, in which a neural network is used to learn the nonlinear dynamics of the interaction part. In [11], considering the influence of the unknown dynamics of the external environment, a radial basis function based controller is proposed, in which an objective function is used to regulate the torque and an adaptive admittance technique is used to minimize path tracking errors. In [12], a human-like-learning based controller is designed for interaction with environmental uncertainties. It is proved that the controller is capable of handling unstable situations such as tool switching, and can finally achieve an expected stability margin; besides, contact force sensors are not required. Using the approximation ability of artificial neural networks, some intelligent controllers are reported in [13,14,15,16,17]. As to physical uncertainties, in [18], a fixed-point controller is proposed based on robust adaptive control theory, and the controller also ensures the boundedness of the control torque. Cheng et al. propose a unit-quaternion based controller using neural networks [19], which shows good performance in eliminating singularities, and semi-global stability is proved theoretically. In [20], a Jacobian adaptation method based on zeroing dynamics is proposed, in which the Jacobian matrix is updated according to the information of the desired and actual accelerations. Other feasible adaptive strategies are reported in [21,22,23,24], in which the Jacobian is estimated by updating the physical parameters online.
As to physical constraints, in [42], an adaptive neural network control scheme is designed for systems with a non-symmetric input dead-zone, output constraints, and model uncertainties; the output constraints are guaranteed by a barrier Lyapunov function. In [43], a boundary adaptive robust controller is established for flexible riser systems, in which an auxiliary system is introduced to suppress vibrational offset and an estimator is constructed to observe the upper bound of the disturbances; the controller achieves global convergence of the control errors. Although the above-mentioned controllers can handle uncertainties in the interaction model or the physical parameters, few studies have considered both uncertainties at the same time. More importantly, those controllers rarely consider the secondary task, let alone the redundancy resolution problem. Besides, the bounds on the joint states are ignored, which are essential for protecting the robot.
In order to accomplish the secondary task within the reliable physical range, kinematic control methods for redundant manipulators based on QP have been proposed [25,26,27,28]. The objective function encodes the secondary task, and the constraints describe the basic properties and physical limits of the system [29]. Because of the high efficiency of parallel computing, recurrent neural networks are often used to solve the QP based redundancy resolution problem. In recent years, research has shown that RNN based controllers perform well in the motion control of redundant manipulators [30]. In [31], in order to achieve task-space tracking, the joint velocity command is designed to ensure the boundedness of the joint angles, velocities, and accelerations. In [32], by maximizing the manipulability index through its time derivative, a manipulability optimization scheme is proposed; numerical experiments show an average improvement of about 40%. In [33], different levels of redundancy resolution are discussed. Recently, RNN based methods have been extended to the control of flexible robots, multi-robot systems, and so on [34,35,36,37,38,39,40]. However, to the best of our knowledge, there is no existing dynamic neural network (including RNN and DNN) scheme for the force control of redundant manipulators, which requires not only trajectory tracking in the free-motion directions but also precise control of the contact force, especially for systems with model uncertainties. In addition, as the literature review suggests, one research direction for dynamic neural networks is to extend schemes for the motion control of redundant manipulators to tasks that require precise control of both tracking and contact force.
Based on the above observations, we propose the first RNN based position-force controller for redundant manipulators that considers uncertainties in both the interaction model and the physical parameters. In this chapter, the ideal case of known model parameters is discussed first, and then an adaptive admittance control scheme based on RNNs is established, which ensures the boundedness of the joint angles and velocities. The effectiveness of the proposed controller is verified by theoretical derivation and by numerical results on the LBR iiwa. The main contributions compared to existing work are as follows:
-
To the best of our knowledge, this is the first work to focus on the motion-force control of redundant manipulators with model uncertainties within the framework of RNNs. Compared to research on kinematic control, this is an important extension of the theoretical framework of dynamic neural networks.
-
Different from traditional methods, an optimization-based controller is proposed; while ensuring the stability of the system, physical limitations are also guaranteed, which is very helpful for improving the safety of the system.
-
The controller proposed in this chapter is able to learn uncertain model parameters online, and thus has better robustness and adaptability to uncertain working conditions.
-
The proposed method is pseudo-inverse-free, which effectively saves computing costs.
3.2 Preliminaries
3.2.1 Problem Formulation
When a robot is controlled to perform a given operational task, the forward-kinematics of a serial manipulator is formulated as
with \(\theta \in \mathbb {R}^n\) being the generalized variable of the robot, and \(x(t)\in \mathbb {R}^m\) being the coordinate of the end-effector in task space. Without loss of generality, in this chapter we assume that all joints are rotational, so \(\theta \) represents the vector of joint angles. At the velocity level, the Jacobian matrix \(J(\theta ,a_k)=\partial {f(\theta (t),a_k)/\partial \theta (t)}\in \mathbb {R}^{m\times n}\) describes the relationship between \(\dot{x}(t)\) and \({\dot{\theta }}\) as
where \(a_k\in \mathbb {R}^l\) is a vector of physical parameters. Based on (3.2), an important property that will be used in the controller design is given below
with \(Y(\theta ,{\dot{\theta }})\in \mathbb {R}^{m\times l}\) being the kinematic regressor matrix.
The physical parameters are essential in describing the robot’s kinematic model; for example, as the most common physical parameters, the lengths of the robot links directly affect the DH parameters, which are fundamental in the controller design. In this chapter, the physical parameters refer to the lengths of the robot links.
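The linear dependence of the task-space velocity on \(a_k\) expressed by property (3.3) can be checked numerically. The sketch below uses a hypothetical planar 2-link arm (not the iiwa model of this chapter) with link lengths \(a=[a_1,a_2]\); all names are illustrative:

```python
import numpy as np

def jacobian(theta, a):
    """Jacobian of a planar 2-link arm with link lengths a = [a1, a2]."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([[-a[0]*np.sin(t1) - a[1]*np.sin(t12), -a[1]*np.sin(t12)],
                     [ a[0]*np.cos(t1) + a[1]*np.cos(t12),  a[1]*np.cos(t12)]])

def regressor(theta, dtheta):
    """Kinematic regressor Y: J(theta, a) @ dtheta == Y(theta, dtheta) @ a."""
    t1, t12 = theta[0], theta[0] + theta[1]
    d1, d12 = dtheta[0], dtheta[0] + dtheta[1]
    return np.array([[-np.sin(t1)*d1, -np.sin(t12)*d12],
                     [ np.cos(t1)*d1,  np.cos(t12)*d12]])

theta = np.array([0.3, 0.7])
dtheta = np.array([0.5, -0.2])
a = np.array([0.36, 0.42])
# both factorizations of xdot agree, as in (3.3)
assert np.allclose(jacobian(theta, a) @ dtheta, regressor(theta, dtheta) @ a)
```

Because the regressor isolates \(a_k\) linearly, an estimate \(\hat{a}_k\) can later be adapted from measurable signals, which is exploited in Sect. 3.3.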
Figure 3.1 shows the interaction between the robot and the environment, where the contact force between the robot and the workpiece is required to be precisely controlled. When the fixed contact surface is known, according to the idea of admittance control, the interaction model can be described as a spring-damper system as
where \(K_p\in \mathbb {R}^{3\times 3}\) and \(K_d\in \mathbb {R}^{3\times 3}\) are the corresponding stiffness and damping coefficients, and \(x_{\text{ d }}\) is the desired trajectory. If \(K_p\) and \(K_d\) are known, the desired contact force \(F_{\text{ d }}\) can be obtained by designing the reference velocity of the end-effector \(\dot{x}_r\) based on \(\dot{x}_{\text{ d }}\), \(x_{\text{ d }}\), x and \(F_{\text{ d }}\) according to Eq. (3.4).
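As a concrete sketch of this construction, assuming the scalar parameters \(k_p\), \(k_d\) of Remark 3.1 and a known unit surface normal (function and variable names are illustrative, not from the chapter):

```python
import numpy as np

def reference_velocity(x, x_d, dx_d, F_d, k_p, k_d, n):
    """Admittance-based reference velocity, a sketch of Eq. (3.5):
    along the unit surface normal n, the desired force F_d and the spring
    term k_p*(x - x_d) are converted into a velocity offset through the
    damping k_d, so tracking dx_r with the true k_p, k_d yields F -> F_d."""
    corr = (F_d - k_p * float(n @ (x - x_d))) / k_d
    return dx_d + corr * n

# on the surface (x == x_d) the correction reduces to F_d/k_d along n
n = np.array([0.0, 0.0, 1.0])
x = x_d = np.array([0.2, -0.6, 0.094])
dx_r = reference_velocity(x, x_d, np.zeros(3), 20.0, 5000.0, 20.0, n)
```

This is the reference-velocity mapping the controller inverts in later sections; when \(k_p\), \(k_d\) are unknown, the same formula is used with their estimates.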
Remark 3.1
In this chapter, we only consider the contact force in the direction normal to the contact surface, and the friction force is ignored; therefore F is aligned with the normal direction of the surface. When the surface is known a priori, by defining a rotation matrix R between the tool coordinate system and the base coordinate system, \(K_p\) and \(K_d\) can be formulated as \(K_p=diag(0,0,k_p)R\) and \(K_d=diag(0,0,k_d)R\), respectively. Then \(K_p\) and \(K_d\) can be described by single parameters.
In practical applications, the real values of system parameters such as \(a_k\), \(K_p\) and \(K_d\) are usually unavailable. In terms of \(a_k\), due to machining and installation errors, the lengths of the robot links may differ from their nominal values, and the robot may hold uncertain tools, which leads to uncertainties in \(a_k\). As to \(K_p\) and \(K_d\), the real values are even more difficult to obtain: they are related to the material and structure of the workpiece, and furthermore, they change under different environmental conditions. Therefore, achieving precise force control in the presence of parameter uncertainties is a challenging issue.
For a redundant manipulator, the redundant DOFs enhance the flexibility of the robot, and this property can be used to achieve a secondary task. In industrial applications, by minimizing the norm of the joint velocities, the kinetic energy can be optimized. Therefore, in this chapter, the objective function is selected as
In order to reduce energy consumption during the control process, a smaller value of \(H(\dot{\theta })\) is preferred.
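With equality constraints only and no active bounds, minimizing \(H=\omega ^{\text{ T }}\omega \) subject to \(J\omega =\dot{x}_r\) has the classical minimum-norm solution. The following sketch (using an arbitrary toy Jacobian, not the iiwa's) verifies the Lagrangian closed form against the pseudo-inverse:

```python
import numpy as np

# a toy redundant "Jacobian" (m = 2 < n = 4), for illustration only
J = np.array([[1.0, 0.5, -0.3, 0.2],
              [0.2, -1.0, 0.4, 0.1]])
dx_r = np.array([0.3, -0.1])

# Stationarity of L = w^T w + lam^T (J w - dx_r) gives w = -J^T lam / 2;
# substituting into J w = dx_r yields the closed form below.
w_star = J.T @ np.linalg.solve(J @ J.T, dx_r)

assert np.allclose(J @ w_star, dx_r)                  # constraint satisfied
assert np.allclose(w_star, np.linalg.pinv(J) @ dx_r)  # = minimum-norm solution
```

The RNN schemes of Sect. 3.3 reach this same minimizer without ever forming the pseudo-inverse, which is what makes them attractive once inequality constraints are added.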
Remark 3.2
The objective function \(H(\dot{\theta })\) is a typical function for describing the secondary task in redundancy resolution problems, as reported in [21, 25]. In actual implementations, this function can be defined according to the designer’s preferences or actual requirements. In this chapter, we propose a generalized RNN based force control strategy with simultaneous optimization ability; based on it, similar controllers can easily be designed by defining different objective functions.
3.2.2 Control Objective and QP Problem Formulation
Before stating the control objective, it is noteworthy that the robot must satisfy certain constraints. For example, due to the physical structure, every joint angle \(\theta _i\) must not exceed its limits, i.e., the lower bound \(\theta _i^-\) and upper bound \(\theta _i^+\). Furthermore, limited by the actual performance of the actuators, the joint velocity \(\dot{\theta }\) is also restricted, i.e., \(\dot{\theta }^-\le \dot{\theta }\le \dot{\theta }^+\).
When the actual parameters of the interaction model are unknown, the control objective of this chapter is to design a force controller with adaptation ability, i.e., one that realizes accurate force control along the predefined contact surface, in the sense that \(F\rightarrow F_{\text{ d }}\); at the same time, the physical constraints on the joint angles and velocities must be respected. According to (3.2), (3.5) and (3.6), the control objective can be described from the viewpoint of optimization as
Remark 3.3
So far, we have arrived at a generalized description of admittance control for redundant manipulators as a QP problem. Apparently, there exist parameter uncertainties in \(J(\theta ,a_k)\), \(k_p\) and \(k_d\) as formulated in (3.7b) and (3.7c). In the next section, we will solve problem (3.7) with the aid of RNNs.
3.3 Main Results
In this section, a recurrent neural network based adaptive admittance controller is proposed to solve (3.7). An ideal situation, in which the real values of the system model are perfectly known, is considered first and lays the foundation for the later discussion. Then an adaptive RNN is proposed to achieve force control in the presence of model uncertainties. We also prove the stability of the control method.
3.3.1 Nominal Design
In order to explain the proposed adaptive control scheme more clearly, the ideal case in which all parameters are perfectly known is discussed first; it can be regarded as a special case of the uncertain-parameter one. In this case, both \(K_d\) and \(K_p\) are available, and then the real value of \(\dot{x}_r\) can be computed according to (3.5).
Let \(\omega =\dot{\theta }\) and define a Lagrange function as \(L_1=\omega ^{\text{ T }}\omega +\lambda ^{\text{ T }}(J\omega -F_dK_d^{-1}+K_pK_d^{-1}(x-x_{\text{ d }})-\dot{x}_{\text{ d }})\), with \(\lambda \) being the Lagrange multiplier. Similar to [25], an RNN with provable convergence can be designed as
where \(\varepsilon \) is a positive constant and \(P_{\omega }(\bullet )\) is a projection operator onto the set \(\varOmega \), defined as \(P_{\omega }(x)=\text {argmin}_{y\in \varOmega }||y-x||\). The set \(\varOmega =\{\omega \in \mathbb {R}^n\arrowvert \omega _{\text {min}}^i\le \omega _i\le \omega _{\text {max}}^i\}\) is a convex set describing the modified velocity constraints based on the escape velocity method [29], with \(\omega _{\text {min}}=\text {max}\{\alpha (\theta _{\text{ min }}-\theta ),\dot{\theta }_{\text {min}}\}\), \(\omega _{\text {max}}=\text {min}\{\alpha (\theta _{\text{ max }}-\theta ),\dot{\theta }_{\text {max}}\}\), and \(\alpha >0\). The stability of the system can be readily proved in a way similar to [25] and is omitted here.
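A minimal numerical sketch of this nominal design follows. The exact RNN dynamics (3.8) are not reproduced here; the form below is a standard projected primal-dual dynamic consistent with \(L_1\) (assumed signs, illustrative gains, toy Jacobian):

```python
import numpy as np

def rnn_step(w, lam, J, dx_r, theta, th_lim, dth_lim,
             eps=0.002, alpha=8.0, dt=1e-4):
    """One Euler step of a projected primal-dual RNN for
    min w^T w  s.t.  J w = dx_r,  w in Omega:
      eps * dw   = -w + P_Omega(-w - J^T lam)
      eps * dlam =  J w - dx_r
    with the escape-velocity bounds of Omega recomputed from theta."""
    w_min = np.maximum(alpha * (-th_lim - theta), -dth_lim)
    w_max = np.minimum(alpha * (th_lim - theta), dth_lim)
    dw = (-w + np.clip(-w - J.T @ lam, w_min, w_max)) / eps
    dlam = (J @ w - dx_r) / eps
    return w + dt * dw, lam + dt * dlam

J = np.array([[1.0, 0.5, -0.3],
              [0.2, -1.0, 0.4]])     # toy Jacobian, m=2, n=3
dx_r = np.array([0.1, -0.05])
w, lam, theta = np.zeros(3), np.zeros(2), np.zeros(3)
for _ in range(5000):                # integrate the neurodynamics
    w, lam = rnn_step(w, lam, J, dx_r, theta, th_lim=2.0, dth_lim=2.0)
```

After the transient, \(\omega \) satisfies \(J\omega =\dot{x}_r\), respects the box bounds, and coincides with the minimum-norm (pseudo-inverse) solution, without any matrix inversion inside the loop.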
3.3.2 Adaptive Control Method Based on RNN
Based on the previous description, in this subsection, by learning the uncertain parameters online, an adaptive RNN is established to solve the force control problem with joint velocity optimization under model uncertainties, and the stability of the system is proved.
3.3.2.1 Adaptive RNN Design
In order to handle the uncertain interaction parameters \(K_p\) and \(K_d\), let \(\hat{K}_p\) and \(\hat{K}_d\) be their estimates. Although \(K_p\) and \(K_d\) are unknown, they are considered to be constant. Then the estimated reference velocity \(\hat{\dot{x}}_r\) can be derived by replacing \(K_p\), \(K_d\) with \(\hat{K}_p\) and \(\hat{K}_d\) respectively according to (3.5)
Let \(\eta =[x-x_{\text{ d }}, \hat{\dot{x}}_r-\dot{x}_{\text{ d }}]^{\text{ T }}\), \(W=[K_p,K_d]^{\text{ T }}\) and \(\hat{W}=[\hat{K}_p,\hat{K}_d]^{\text{ T }}\). Then (3.9) can be rewritten as \(F_{\text{ d }}=\hat{W}^{\text{ T }}\eta \). However, due to the uncertainties in \(K_d\) and \(K_p\), the contact force actually resulting from using \(\hat{\dot{x}}_r\) directly is \(F=W^{\text{ T }}\eta \); it is noteworthy that the contact force F can be measured by force/torque sensors.
As to the uncertain \(a_k\), the alternative Jacobian matrix \(J(\theta , \hat{a}_k)\) is used by substituting \(a_k\) with its estimate \(\hat{a}_k\), then \(J(\theta ,a_k)\) in equality constraint (3.7b) is replaced by \(J(\theta ,\hat{a}_k)\). Therefore, the force control problem with joint speed optimization considering model uncertainties can be formulated as
To solve (3.10), by defining a Lagrange function as \(L=\omega ^{\text{ T }}\omega +\lambda ^{\text{ T }}(J(\theta ,\hat{a}_k)\omega -\hat{\dot{x}}_r)\), the adaptive RNN is designed as
where \(\varepsilon \), \(\varGamma _1\) and \(\varGamma _2\) are positive gains. Figure 3.2 shows the framework of the proposed adaptive RNN in real-time force control with uncertain parameters. In order to learn the uncertain parameters, the neurons \(\hat{W}\) and \(\hat{a}_k\) update their values based on desired signals \(x_{\text{ d }}\), \(\dot{x}_{\text{ d }}\) and \(F_{\text{ d }}\) and the feedback of x, \(\dot{x}\) and F. The output of the RNN is exactly the joint speed command \(\omega \). By designing proper updating laws, \(\lambda \) and \(\omega \) achieve both stability of the inner loop and the optimization of \(H(\omega )\).
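The updating laws in (3.11) are not reproduced in full here; the sketch below shows the two adaptation terms in the form suggested by the stability proof of Sect. 3.3.2.2, with \(e_f=\tilde{W}^{\text{ T }}\eta \) as in Step 1 and the regressor form of Step 2. Gains, step sizes, and the toy excitation are illustrative assumptions:

```python
import numpy as np

def adaptation_step(W_hat, a_hat, eta, e_f, Y, dx, Gamma1, Gamma2, dt=1e-4):
    """One Euler step of adaptation laws consistent with the proof:
      dW_hat/dt = -Gamma1 @ eta * e_f              (drives e_f -> 0)
      da_hat/dt = -Gamma2 @ Y.T @ (Y @ a_hat - dx) (drives Y a_tilde -> 0)
    e_f is the scalar force error signal, dx the measured task velocity."""
    W_hat = W_hat + dt * (-Gamma1 @ eta * e_f)
    a_hat = a_hat + dt * (-Gamma2 @ Y.T @ (Y @ a_hat - dx))
    return W_hat, a_hat

# toy check: under constant excitation both error signals vanish
W_true, a_true = np.array([5000.0, 20.0]), np.array([0.36, 0.42])
W_hat, a_hat = np.array([4500.0, 15.0]), np.array([0.3, 0.5])
eta = np.array([0.5, -0.2])
Y = np.array([[0.8, -0.1], [0.2, 1.1]])        # fixed full-rank regressor
Gamma1, Gamma2 = np.diag([5000.0, 3000.0]), 100.0 * np.eye(2)
for _ in range(20000):
    e_f = float((W_hat - W_true) @ eta)
    W_hat, a_hat = adaptation_step(W_hat, a_hat, eta, e_f, Y, Y @ a_true,
                                   Gamma1, Gamma2)
```

Note that \(\hat{W}\) only needs to converge in the directions excited by \(\eta \) for the force error to vanish, exactly as the invariance argument of Step 1 requires; \(\hat{a}_k\) converges fully here because the toy regressor is nonsingular.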
Remark 3.4
In this chapter, we consider the case where \(m=6\) and \(n=7\) (where m is the dimension of the Cartesian space, and n is the number of joints). Since only the contact force normal to the surface is considered, the dimensions of \(K_d\) and \(K_p\) are both 1 (the contact surface is known). As illustrated in Fig. 3.2, the proposed adaptive RNN has a typical one-layer architecture, and the total number of neurons is \(n+l+m+2\).
Remark 3.5
The proposed adaptive RNN (3.11) can be regarded as a generalized form of the nominal RNN (3.8): when \(\hat{W}=W\) and \(\hat{a}_k=a_k\), it can be obtained from (3.3) and (3.9) that \(\dot{\hat{W}}=0\) and \(\dot{\hat{a}}_k=0\), and then (3.11) reduces to (3.8). However, it is remarkable that the adaptive RNN is capable of dealing with model uncertainties. On the other hand, unlike the adaptive RNN based kinematic control strategies in [21, 22], the proposed controller achieves precise control of both position and contact force.
3.3.2.2 Stability Analysis
So far, a theorem on the convergence of the force control scheme using the proposed adaptive RNN in the presence of model uncertainties can be summarised as below
Theorem 3.1
Consider the force control problem for the class of redundant manipulators described in (3.1)–(3.4) with model uncertainties. The state variable \(\omega \) of the proposed adaptive RNN will converge to the optimal solution of (3.7), i.e., the force control error will converge to 0, and the norm of the joint velocity will be optimized simultaneously.
Proof
The proof consists of three steps. First, we prove that \(\hat{W}\) learns the interaction parameters online; second, that \(\hat{a}_k\) learns the kinematic parameters; finally, the stability of the inner loop is analyzed.
Step 1. Define the estimation error of the concatenated form of W as \(\tilde{W}=\hat{W}-W\), and let \(e_f\) be the error between the contact force and the desired signal. From (3.9), \(e_f\) can be formulated as \(e_f=\hat{W}^{\text{ T }}\eta -W^{\text{ T }}\eta =\tilde{W}^{\text{ T }}\eta \). Consider the Lyapunov function \(V_1=tr({\tilde{W}}^{\text{ T }}\tilde{W})/2\), where \(tr(\bullet )\) is the trace of a matrix. Calculating the time derivative of \(V_1\) yields
From (3.12) and (3.10) and using LaSalle's invariance principle [41], we have \(e_f^{\text{ T }}e_f\rightarrow 0\) as \(t\rightarrow \infty \). In other words, the state variable \(\hat{W}\) ensures the convergence of the force error \(e_f\) by modifying the reference speed \(\hat{\dot{x}}_r\) of the end-effector according to (3.5).
Step 2. Define the estimation error of \(a_k\) as \(\tilde{a}_k=\hat{a}_k-a_k\), and let \(V_2=\tilde{a}_k^{\text{ T }}\tilde{a}_k/2\). It is notable that during the control process \(a_k\) can be regarded as constant, so \(\dot{\tilde{a}}_k=\dot{\hat{a}}_k\). Using the property described in Eq. (3.3), which is linear in \(\dot{\theta }\) and \(a_k\), respectively, the task-space velocity \(\dot{x}\) has two equivalent descriptions, namely \(J(\theta ,a_k)\dot{\theta }\) and \(Y(\theta ,\dot{\theta })a_k\). As a result, the estimated value \(\hat{\dot{x}}\) also has two similar descriptions, depending on the estimate \(\hat{a}_k\) of the kinematic parameters. Therefore, the updating law in Eq. (3.11) is equivalent to \(-\varGamma _2Y^{\text{ T }}(Y(\theta ,\omega )\hat{a}_k-\dot{x})\). It then follows from (3.2) and (3.3) that
In light of (3.13), \(\dot{V}_2\) can be rewritten as
Then it can be readily obtained that \(Y(\theta ,\omega )\tilde{a}_k\rightarrow 0\) as \(t\rightarrow \infty \). From (3.3) and the definition of \(\tilde{a}_k\), \(J(\theta ,\hat{a}_k)\omega \) will eventually converge to \(J(\theta ,a_k)\omega \), i.e., the equality constraint (3.10b) will eventually be equivalent to (3.7b).
Step 3. We now prove the stability of the inner-loop system. According to (3.11), the dynamics of \(\omega \) and \(\lambda \) can be reformulated as
with \(\xi =[\omega ^{\text{ T }},\lambda ^{\text{ T }}]^{\text{ T }}\), \({\bar{\varOmega }}=\{(\omega ,\lambda )|\omega \in \varOmega ,\lambda \in \mathbb {R}^m\}\), and
Defining \(\nabla F=\partial F(\xi )/\partial \xi \), we have
with I being the identity matrix. It can then be readily obtained that \(\nabla F+(\nabla F)^{\text{ T }}\) is positive semi-definite. According to the definition in [32], F is a monotone function of \(\xi \). From the descriptions in (3.11) and (3.15), \(P_{\bar{\varOmega }}\) can be formulated as \(P_{\bar{\varOmega }}=[P_\varOmega ;P_R]\), where \(P_R\in \mathbb {R}^m\) is a projection operator of \(\lambda \) onto the set \(\mathbb {R}^m\), whose upper and lower bounds are \(\pm \infty \). Therefore, \(P_{\bar{\varOmega }}\) is a projection operator onto the closed set \({\bar{\varOmega }}\). Based on Lemma 1 in [32], the adaptive RNN (3.11) is stable, and its output will ultimately converge to the solution of (3.7). This completes the proof.\(\Box \)
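The positive semi-definiteness claimed above can be checked numerically. The sketch below assumes the block structure of \(\nabla F\) implied by the saddle-point dynamics (our reading of the omitted equation, with \(2I\) and \(\pm J^{\text{ T }}\) blocks), using a toy Jacobian:

```python
import numpy as np

J = np.array([[1.0, 0.5, -0.3],
              [0.2, -1.0, 0.4]])       # toy Jacobian, m=2, n=3
n, m = J.shape[1], J.shape[0]

# gradient of F(xi) = [2w + J^T lam; -(J w - dx_r)] w.r.t. xi = [w; lam]
gradF = np.block([[2.0 * np.eye(n), J.T],
                  [-J, np.zeros((m, m))]])

S = gradF + gradF.T   # skew blocks J^T and -J cancel: S = diag(4*I_n, 0_m)
assert np.allclose(S, np.diag([4.0, 4.0, 4.0, 0.0, 0.0]))
assert np.all(np.linalg.eigvalsh(S) >= -1e-12)   # PSD -> F is monotone
```

The cancellation of the off-diagonal blocks is exactly why the equality-constraint coupling does not destroy monotonicity, regardless of the conditioning of \(J\).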
Remark 3.6
Till now, we have shown the stability of the proposed RNN based adaptive admittance control strategy in the presence of uncertain model parameters. The established adaptive RNN is capable of maintaining the boundedness of system states and avoiding calculating the pseudo-inversion of the Jacobian matrix.
3.4 Illustrative Examples
In this section, numerical results on the 7-DOF robot manipulator LBR iiwa are presented. The physical structure and D-H parameters of the iiwa are shown in Fig. 3.3. As is well known, up to 6 DOFs (3 for position and another 3 for orientation) are required to fulfill a given task in engineering applications; therefore, the iiwa is a typical redundant manipulator for force control when considering both the position and orientation of the end-effector. As to the contact force, the contact surface is selected as a plane in the workspace, as shown in Fig. 3.3a. The end-effector is controlled to exert a desired contact force on the contact surface while tracking a given path on it. During the control process, the orientation of the end-effector is expected to remain constant.
This section mainly consists of three parts. First, a comparative simulation between the proposed controller and a pseudo-inverse of the Jacobian matrix (PJMI) based method is discussed; then the effectiveness of the proposed adaptive controller is verified in further cases. In addition, further discussion of the advantages of the proposed method is carried out to highlight the contribution of this chapter.
3.4.1 Simulation Setup
In the simulations, the initial value of the joint angles is set as \(\theta _0=[0,\pi /3,0,\pi /3,0,\pi /3,0]^{\text{ T }}\)rad, and the corresponding coordinate of the end-effector is denoted by P0. The initial value of the joint velocities is set as \(\dot{\theta }_0=[0,0,0,0,0,0,0]^{\text{ T }}\)rad/s. The contact surface is defined as a horizontal plane at \(z=0.094\)m, and the physical parameters of the interaction model are set as \(K_p=5000\) and \(K_d=20\), respectively. The limits of the joint angles and velocities are selected as \(\theta ^-=[-2,-2,-2,-2,-2,-2,-2]^{\text{ T }}\)rad, \(\theta ^+=[2,2,2,2,2,2,2]^{\text{ T }}\)rad, \(\dot{\theta }^-=[-2,-2,-2,-2,-2,-2,-2]^{\text{ T }}\)rad/s and \(\dot{\theta }^+=[2,2,2,2,2,2,2]^{\text{ T }}\)rad/s, respectively. The control gains of the proposed adaptive RNN are set as \(\varepsilon =0.002\), \(\varGamma _1=diag(5000,3000)\), \(\varGamma _2=100I\) and \(\alpha =8\), respectively.
3.4.2 Comparative Simulation with the PJMI Method
Firstly, a comparative simulation between the proposed control strategy and a traditional Jacobian-inverse based method is carried out to show the superiority of the RNN based controller. The robot is expected to provide a contact force of 20 N at a fixed point \(P1=[0.2,0.6,0.094]^{\text{ T }}\)m, without considering the orientation control of the end-effector. In traditional PJMI based methods, the joint commands are obtained by directly calculating the pseudo-inverse of the Jacobian matrix online, and only the particular solution is considered. Simulation results are shown in Fig. 3.4. Both controllers guarantee the convergence of the position and force errors. Using the same control gain in the outer loop, although the controller based on PJMI achieves faster convergence of the control errors, its output is large at the beginning of the simulation (with the Euclidean norm of the joint velocity being about 20 rad/s); moreover, as shown in Fig. 3.4c, the joint angle \(\theta _6\) exceeds its upper bound during 0.2–1 s. In contrast, using the RNN based controller, both the joint angles and velocities are ensured not to exceed their limits. It is worth noting that at about \(t=0.6\)s, \(\theta _6\) reaches its upper limit (Fig. 3.4e); correspondingly, the joints move over a relatively large range, as shown in Fig. 3.4f, and as a result \(\theta _6\) stops increasing while the joint angles converge to a set of constant values via self-motion.
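The qualitative difference reported above can be reproduced with a toy example (arbitrary numbers, not the iiwa simulation): when the task-space reference is large, the raw pseudo-inverse command violates the ±2 rad/s limits, while a box projection of the kind used inside the RNN keeps the command feasible:

```python
import numpy as np

# toy Jacobian and a large task-space reference (big initial error)
J = np.array([[1.0, 0.5, -0.3],
              [0.2, -1.0, 0.4]])
dx_r = np.array([8.0, -6.0])
dth_max = 2.0                              # speed limit, rad/s

w_pjmi = np.linalg.pinv(J) @ dx_r          # raw PJMI joint command
w_proj = np.clip(w_pjmi, -dth_max, dth_max)  # box-projected command

assert np.any(np.abs(w_pjmi) > dth_max)    # PJMI violates the limits
assert np.all(np.abs(w_proj) <= dth_max)   # projected command is feasible
```

The projected command no longer satisfies \(J\omega =\dot{x}_r\) exactly during the transient, which is consistent with the slower but limit-respecting convergence observed for the RNN based controller.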
3.4.3 Force Control Along Predefined Trajectories with Model Uncertainties
In this subsection, we carry out a group of numerical tests to further verify the validity of the RNN based admittance controller (3.11). In terms of the interaction parameters, we assume the nominal values of \(K_p\) and \(K_d\) are 4500 and 15, respectively. As to the kinematic parameters, the nominal value of \(a_k\) is set to \(\hat{a}_k(0)=[\hat{D}_1(0),\hat{D}_3(0),\hat{D}_5(0),\hat{D}_7(0)]^{\text{ T }}=[0.36,0.4,0.42,0.25]^{\text{ T }}\)m.
(1) Force Control On Fixed Points
Similar to Sect. 3.4.2, motion-force control at fixed points is studied first. When the simulation begins, the target point is set as \(P1=[0.2,-0.6,0.094]^{\text{ T }}\)m; at \(t=5\)s, the target point is reset to \(P2=[0.2,-0.4,0.094]^{\text{ T }}\)m. During the simulation, the desired contact force between the end-effector and the contact surface is \(F_{\text{ d }}=20\)N. Numerical results are shown in Figs. 3.5 and 3.6.
The position error at the beginning of the simulation is about 0.2 m; accordingly, the proposed RNN based controller generates a large output, which ensures the quick convergence of both the motion and force errors. The stabilization time is about 0.5 s. At \(t=5\)s, the target point is switched to P2, leading to an instantaneous change of the position error. Using the adaptive admittance controller (3.11), the robot adjusts its joint configuration quickly and then slows down as the errors converge. It is remarkable that the second joint reaches its maximum value, and during the whole process the joint velocities are guaranteed not to exceed the predefined limits. The estimated values \(\hat{K}_p\) and \(\hat{K}_d\) are shown in Fig. 3.5f; although the exact values of \(K_d\) and \(K_p\) are unknown, by updating \(\hat{K}_p\) and \(\hat{K}_d\) online according to (3.11), precise force control is achieved. The difference between the task-space velocity \(\dot{x}\) and its estimate \(Y\hat{a}_k\) is shown in Fig. 3.5g; correspondingly, \(\hat{D}_1\), \(\hat{D}_3\), \(\hat{D}_5\) and \(\hat{D}_7\) converge to a group of constant values.
(2) Force Control Along A Circular Path
In this example, the end-effector is controlled to exert a constant contact force \(F_{\text{ d }}=20\)N while tracking a circular trajectory on the contact surface, defined as \(x_{\text{ d }}=[-0.1+0.1\cos (0.5t),-0.6-0.1\sin (0.5t),0.094]^{\text{ T }}\)m, with the orientation required to remain the same as in the initial state. Numerical results are shown in Figs. 3.7 and 3.8. As shown in Fig. 3.7a, the robot tracks the desired path successfully, both position and orientation errors converge to zero in less than 1 s, and the expected contact force is obtained. Because of the periodicity of the desired commands, the robot's joint angles and angular velocities change periodically; at the same time, the boundedness of \(\theta \) and \(\dot{\theta }\) is guaranteed. On the other hand, the smooth change of \(\theta \) and \(\dot{\theta }\) shows that the proposed controller is very stable. Based on the adaptive strategy (3.11d), the system shows great robustness against uncertain system parameters.
(3) Force Control Along A Rhodonea Path
In this example, we consider the case where the robot provides a time-varying contact force while tracking a Rhodonea path. The desired contact force is set to \(F_{\text{ d }}=20+5\sin (0.2t)\)N, and the Rhodonea path is defined as
Numerical results are shown in Figs. 3.9 and 3.10. Figure 3.9a, b show the position and orientation errors of the end-effector with respect to the desired path, respectively. In the steady state, accurate motion control is realized using the proposed controller, and the contact force between the end-effector and the contact surface is shown in Fig. 3.9c. At the beginning, the joint speed is high, which enables the end-effector to move toward the desired path rapidly. As the end-effector approaches the expected path, the robot moves at a low speed periodically and smoothly; correspondingly, the joint angles change at the same frequency. The Euclidean norm of \(Y\hat{a}_k-\dot{x}\) is illustrated in Fig. 3.9g; the proposed RNN based control strategy computes the control command \(\omega \) online subject to model uncertainties. The estimated model parameters are given in Fig. 3.10f, h.
3.4.4 Comparison
To further illustrate the contribution of the proposed force control strategy, we compare it with existing related methods, as shown in Table 3.1. In [11], an adaptive admittance control scheme based on the approximation capability of neural networks is proposed and the admittance tracking error is optimized; however, no physical constraints are considered. In [10], although the impedance controller handles input saturation, it still needs to compute the pseudo-inverse of the Jacobian. In [25, 32], pseudo-inverse-free RNN-based controllers are designed for task-space tracking of redundant robots, with convergence proofs covering both convex and non-convex optimization formulations. Considering physical uncertainties, two different adaptive strategies are proposed in [21, 22]. The controller developed in this chapter is, to our knowledge, the first RNN-based force controller; it remains applicable under model uncertainty, removes the need to compute the pseudo-inverse of the Jacobian, and thus has great application potential in force control.
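The pseudo-inverse-free idea behind the comparison can be sketched numerically: a dynamic (primal-dual) iteration converges to the same minimum-norm joint velocity as the explicit pseudo-inverse, without ever forming it. The values of J and the step size are illustrative assumptions, not the chapter's controller.

```python
import numpy as np

# Illustrative contrast (not the chapter's exact controller): pseudo-inverse
# solution vs. a pseudo-inverse-free dynamic iteration for the redundancy
# resolution  min ||q_dot||^2  s.t.  J q_dot = x_dot.
rng = np.random.default_rng(1)
J = rng.standard_normal((6, 7))          # Jacobian of a 7-DOF redundant arm (example values)
x_dot = rng.standard_normal(6)           # desired task-space velocity (example values)

q_pinv = np.linalg.pinv(J) @ x_dot       # classical pseudo-inverse solution

# Primal-dual (KKT saddle-point) dynamics, discretized with Euler steps:
q_dot = np.zeros(7)
lam = np.zeros(6)
eps = 2e-2                               # step size, an assumption
for _ in range(200_000):
    q_dot += eps * (-q_dot - J.T @ lam)  # primal descent
    lam += eps * (J @ q_dot - x_dot)     # dual ascent on the equality constraint
# q_dot approaches q_pinv without computing pinv(J)
```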
3.5 Summary
In this chapter, we propose an adaptive admittance control method for redundant robots based on a recurrent neural network. The convergence of the adaptive RNN is proved via Lyapunov theory, and the effectiveness of the control strategy is verified by numerical simulations on the 7-DOF robot LBR iiwa. Compared with existing control methods, the proposed controller both handles physical constraints explicitly and eliminates pseudo-inverse computations. Finally, it is worth noting that this is the first time an RNN-based approach has been extended to force control of redundant manipulators, especially under model uncertainties. This research is of great significance to grinding robots, assembly robots, and related industrial applications.
References
C. Yang, Y. Jiang, W. He, J. Na, Z. Li, B. Xu, Adaptive Parameter Estimation and Control Design for Robot Manipulators With Finite-Time Convergence. IEEE Transactions on Industrial Electronics 65(10), 8112–8123 (2018)
L. Cheng, Z.G. Hou, M. Tan, Adaptive neural network tracking control for manipulators with uncertain kinematics, dynamics and actuator model. Automatica 45(10), 2312–2318 (2009)
Y. Pan, H. Wang, X. Li, H. Yu, Adaptive Command-Filtered Backstepping Control of Robot Arms With Compliant Actuators. IEEE Transactions on Control Systems Technology 26(3), 1149–1156 (2018)
N. Hogan, Impedance control - An approach to manipulation. I - Theory. II - Implementation. III - Applications. Asme Transactions Journal of Dynamic Systems & Measurement Control B 107(1), 304–313 (1985)
J.J. Craig, P. Hsu, S.S. Sastry, Adaptive control of mechanical manipulators, in Proceedings of the 1986 IEEE International Conference on Robotics and Automation, pp. 190–195 (1986)
M.H. Raibert, J.J. Craig, Hybrid Position/Force Control of Manipulators. Asme Journal of Dynamic Systems Measurement & Control 102(2), 126–133 (1981)
Y.P. Pan, X. Li, H.M. Wang, H.Y. Yu, Continuous sliding mode control of compliant robot arms: A singularly perturbed approach. Mechatronics 52(1), 127–134 (2018)
W. He, S.S. Ge, Y.N. Li, E. Chew, Neural Network Control of a Rehabilitation Robot by State and Output Feedback. Journal of Intelligent & Robotic Systems 80(1), 15–31 (2015)
J. Ren, B. Wang, M. Cai, J. Yu, Adaptive Fast Finite-Time Consensus for Second-Order Uncertain Nonlinear Multi-Agent Systems With Unknown Dead-Zone. IEEE Access 8(1), 25557–25569 (2020)
W. He, Y. Dong, C. Sun, Adaptive Neural Impedance Control of a Robotic Manipulator With Input Saturation. IEEE Transactions on Systems, Man, and Cybernetics: Systems 46(3), 334–344 (2016)
C. Yang, G. Peng, Y. Li, R. Cui, L. Cheng, Z. Li, Neural Networks Enhanced Adaptive Admittance Control of Optimized Robot-Environment Interaction. IEEE Transactions on Cybernetics 49(7), 2568–2579 (2019)
C. Yang, G. Ganesh, S. Haddadin, S. Parusel, A. Albu-Schaeffer, E. Burdet, Human-Like Adaptation of Force and Impedance in Stable and Unstable Interactions. IEEE Transactions on Robotics 27(5), 918–930 (2011)
H. Wu, Y. Guan, J. Rojas, A latent state-based multimodal execution monitor with anomaly detection and classification for robot introspection. Applied Sciences 9(6), 1072 (2019)
H. Wu, Z. Xu, W. Yan, Q. Su, S. Li, T. Cheng, X. Zhou, Incremental Learning Introspective Movement Primitives From Multimodal Unstructured Demonstrations. IEEE Access 7, 159022–159036 (2019)
H. Wu, Y. Guan, J. Rojas, Analysis of multimodal Bayesian nonparametric autoregressive hidden Markov models for process monitoring in robotic contact tasks. International Journal of Advanced Robotic Systems 16(2), 1729881419834840 (2019)
B. Xu, D. Yang, Z. Shi, Y. Pan, B. Chen, F. Sun, Online Recorded Data-Based Composite Neural Control of Strict-Feedback Systems With Application to Hypersonic Flight Dynamics. IEEE Transactions on Neural Networks and Learning Systems 29(8), 3839–3849 (2018)
B. Xu, D. Wang, Y. Zhang, Z. Shi, DOB-Based Neural Control of Flexible Hypersonic Flight Vehicle Considering Wind Effects. IEEE Transactions on Industrial Electronics 64(11), 8676–8685 (2017)
W.E. Dixon, Adaptive Regulation of Amplitude Limited Robot Manipulators With Uncertain Kinematics and Dynamics. IEEE Transactions on Automatic Control 52(3), 488–493 (2007)
L. Cheng, Z.G. Hou, M. Tan, Adaptive neural network tracking control of manipulators using quaternion feedback, in 2008 IEEE International Conference on Robotics and Automation, pp. 3371–3376 (2008)
D. Chen, Y. Zhang, S. Li, Tracking Control of Robot Manipulators with Unknown Models: A Jacobian-Matrix-Adaption Method. IEEE Transactions on Industrial Informatics 14(7), 3044–3053 (2018)
Z. Xu, S. Li, X. Zhou, W. Yan, T. Cheng, D. Huang, Dynamic neural networks based kinematic control for redundant manipulators with model uncertainties. Neurocomputing 329(1), 255–266 (2019)
Y. Zhang, S. Chen, S. Li, Z. Zhang, Adaptive Projection Neural Network for Kinematic Control of Redundant Manipulators With Unknown Physical Parameters. IEEE Transactions on Industrial Electronics 65(6), 4909–4920 (2018)
J. Na, M. Nasiruddin, H. Guido, X. Ren, B. Phil, Robust adaptive finite-time parameter estimation and control for robotic systems. International Journal of Robust & Nonlinear Control 25(16), 3045–3071 (2015)
H. Wang, P. Shi, H. Li, Q. Zhou, Adaptive Neural Tracking Control for a Class of Nonlinear Systems With Dynamic Uncertainties. IEEE Transactions on Cybernetics 47(10), 3075–3087 (2017)
S. Li, Y. Zhang, L. Jin, Kinematic Control of Redundant Manipulators Using Neural Networks. IEEE Transactions on Neural Networks and Learning Systems 28(10), 2243–2254 (2017)
Y. Zhang, Inverse-free computation for infinity-norm torque minimization of robot manipulators. Mechatronics 16(3), 177–184 (2006)
D. Chen, S. Li, Q. Wu, X. Luo, Super-twisting ZNN for coordinated motion control of multiple robot manipulators with external disturbances suppression. Neurocomputing 371(1), 78–90 (2020)
D. Chen, S. Li, A recurrent neural network applied to optimal motion control of mobile robots with physical constraints. Applied Soft Computing, to be published (2019). https://doi.org/10.1016/j.asoc.2019.105880
Y. Zhang, S. Ge, T. Lee, A unified quadratic-programming-based dynamical system approach to joint torque optimization of physically constrained redundant manipulators. IEEE Transactions on Systems, Man, & Cybernetics, Part B (Cybernetics) 34(5), 2126–2132 (2004)
S. Li, M. Zhou, X. Luo, Z. You, Distributed Winner-Take-All in Dynamic Networks. IEEE Transactions on Automatic Control 62(2), 577–589 (2017)
Y. Zhang, S. Li, J. Gui, X. Luo, Velocity-Level Control With Compliance to Acceleration-Level Constraints: A Novel Scheme for Manipulator Redundancy Resolution. IEEE Transactions on Industrial Informatics 14(3), 921–930 (2018)
L. Jin, S. Li, H.M. La, X. Luo, Manipulability Optimization of Redundant Manipulators Using Dynamic Neural Networks. IEEE Transactions on Industrial Electronics 64(6), 4710–4720 (2017)
B. Cai, Y. Zhang, Different-Level Redundancy-Resolution and Its Equivalent Relationship Analysis for Robot Manipulators Using Gradient-Descent and Zhang’s Neural-Dynamic Methods. IEEE Transactions on Industrial Electronics 59(8), 3146–3155 (2012)
J. Na, X.M. Ren, D.D. Zheng, Adaptive Control for Nonlinear Pure-Feedback Systems With High-Order Sliding Mode Observer. IEEE Transactions on Neural Networks and Learning Systems 24(3), 370–382 (2013)
S. Li, S. Chen, B. Liu, Y. Li, Y. Liang, Decentralized kinematic control of a class of collaborative redundant manipulators via recurrent neural networks. Neurocomputing 91(1), 1–10 (2012)
H. Wang, K. Liu, X. Liu, B. Chen, C. Lin, Neural-Based Adaptive Output-Feedback Control for a Class of Nonstrict-Feedback Stochastic Nonlinear Systems. IEEE Transactions on Cybernetics 45(9), 1977–1987 (2015)
Y. Li, S. Li, B. Hannaford, A Model based Recurrent Neural Network with Randomness for Efficient Control with Applications. IEEE Transactions on Industrial Informatics 15(4), 2054–2063 (2019)
X. Li, Z. Xu, S. Li, H. Wu, X. Zhou, Cooperative Kinematic Control For Multiple Redundant Manipulators Under Partially Known Information Using Recurrent Neural Network. IEEE Access 8(1), 40029–40038 (2020)
Z. Xu, S. Li, X. Zhou, T. Cheng, Dynamic Neural Networks Based Adaptive Admittance Control for Redundant Manipulators with Model Uncertainties. Neurocomputing 357(1), 271–281 (2019)
Z. Xu, S. Li, X. Zhou, W. Yan, T. Cheng, H. Dan, Dynamic Neural Networks for Motion-Force Control of Redundant Manipulators: An Optimization Perspective. IEEE Transactions on Industrial Electronics, early access (2020). https://doi.org/10.1109/TIE.2020.2970635
H. Khalil, Nonlinear Systems (Prentice Hall, New Jersey, USA, 1996)
Z. Zhao, X. He, Z. Ren, G. Wen, Boundary Adaptive Robust Control of a Flexible Riser System with Input Nonlinearities. IEEE Transactions on Systems, Man, and Cybernetics: Systems 49(10), 1971–1980 (2019). https://doi.org/10.1109/TSMC.2018.2882734
Z. Zhao, C.K. Ahn, H.-X. Li, Deadzone Compensation and Adaptive Vibration Control of Uncertain Spatial Flexible Riser Systems. IEEE/ASME Transactions on Mechatronics, in press. https://doi.org/10.1109/TMECH.2020.2975567
Rights and permissions
Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
Copyright information
© 2020 The Author(s)
Zhou, X., Xu, Z., Li, S., Wu, H., Cheng, T., Lv, X. (2020). RNN Based Adaptive Compliance Control for Robots with Model Uncertainties. In: AI based Robot Safe Learning and Control. Springer, Singapore. https://doi.org/10.1007/978-981-15-5503-9_3
Publisher Name: Springer, Singapore
Print ISBN: 978-981-15-5502-2
Online ISBN: 978-981-15-5503-9
eBook Packages: Intelligent Technologies and Robotics