Abstract
In this paper, a sliding mode (SM)-based online fault compensation control scheme is investigated for modular reconfigurable robots (MRRs) with actuator failures via adaptive dynamic programming. It consists of a SM-based iterative controller, an adaptive robust term, and an online fault compensator. For fault-free MRR systems, the SM surface-based Hamilton–Jacobi–Bellman equation is solved by an online policy iteration algorithm. The adaptive robust term is added to guarantee the reachability condition of the SM surface. For faulty MRR systems, the actuator failure is compensated online, which avoids the need for a fault detection and isolation mechanism. The closed-loop MRR system is guaranteed to be asymptotically stable under the developed fault compensation control scheme. Simulation results verify the effectiveness of the presented fault compensation control approach.
Introduction
Since modular reconfigurable robots (MRRs) offer structural flexibility, low cost, excellent adaptability, and other advantages, they often work in perilous and complex environments, such as disaster rescue, deep space/sea exploration, and smart manufacturing, as well as many other hazardous settings in which humans cannot be involved directly [1,2,3].
In recent years, research on MRR control approaches has attracted a great deal of attention, including centralized control [4, 5], distributed control [6, 7], and decentralized control [8, 9]. These control approaches mainly tackle force/position control problems [10, 11], fault-tolerant control problems [12, 13], and so on. Although the above methods have achieved good performance, the designed controllers always contain adjustable parameters, which increase design difficulty and structural complexity. Thus, more attention should be paid to simplifying the control structure and reducing the computational burden.
As is well known, optimality is one of the key requirements in modern control theory: optimal control not only ensures the stability of the control system but also achieves a prescribed optimal performance. Owing to its strong self-learning and optimization abilities, adaptive dynamic programming (ADP) [14] was introduced and extensively investigated for optimal control, as it solves the Hamilton–Jacobi–Bellman equation (HJBE) without the “curse of dimensionality”. Consequently, more and more ADP-based control methods have been investigated to deal with trajectory tracking [15, 16], zero-sum games [17], uncertainties [18], actuator saturation [19], etc. It is worth pointing out that reinforcement learning (RL) and ADP share essentially the same spirit when dealing with optimal control problems; hence, RL is often regarded as a synonym for ADP. Up to now, many ADP and RL methods have been investigated [20,21,22,23]. Bai et al. designed optimal control approaches via the neural network (NN) technique and RL algorithms to tackle the nonstrict-feedback control problem [24] and input saturation [25]. For systems with known dynamics, Shi et al. [26] proposed an optimal tracking control (OTC) approach to handle time-delay problems via integral RL and the value iteration method. In [27], a novel approximate OTC strategy was developed using an event-driven ADP algorithm. However, the aforementioned methods require accurate system dynamics, which are difficult to obtain in real industrial applications of MRRs. Recently, some model-free RL-based control methods have been presented, which depend merely on input and output measurement data of the controlled plant [28]. However, model-free methods require large amounts of online or offline data to train NNs, which consumes considerable computation and training time.
Furthermore, MRRs that work in hazardous environments for a long time may suffer failures, which not only degrade system performance but can even damage the surrounding workspace. Actuator failure is regarded as one of the most challenging failures to handle, because unknown actuator failures can cause more serious deterioration of control performance than other fault scenarios. Moreover, it is impractical to repair MRRs in hazardous environments. Hence, exploring a fault-tolerant control (FTC) method is imperative to guarantee that MRRs continue working reliably in the presence of actuator failures.
FTC strategies mainly include passive FTC (PFTC) and active FTC (AFTC). PFTC does not need a fault detection and identification (FDI) unit. Over the last few decades, many PFTC approaches have been presented, mainly based on quantitative feedback theory [29], linear matrix inequalities [30], and \({H}_{\infty }\) theory [31]. PFTC designs a fixed controller before a fault occurs, so it can only handle known faults [32]. Alternatively, AFTC effectively avoids this drawback: it obtains fault information via FDI and then readjusts or reconstructs the control law. AFTC approaches can be categorized into fault accommodation [33], fault reconfiguration [34], and fault compensation [35]. Owing to its better performance, AFTC has potential in robot manipulators [36], quadrotors [37], inverted pendulums [38], and other practical applications. Moreover, some FTC schemes have been developed through RL or ADP. Zhao et al. [32] employed the information of a fault observer to construct an improved cost function and utilized an online iteration algorithm to develop a novel FTC method for nonlinear systems. Fan and Yang [39] investigated an FTC strategy to handle time-varying actuator bias faults via ADP. In [40], an ADP-based stabilizing scheme for nonlinear systems with unknown actuator saturation was developed via NN compensation. However, these works solved stabilization problems rather than trajectory tracking, which is what MRRs require.
To obtain rapid response and convergence, sliding mode-based control schemes have been presented. Owing to its low sensitivity and robustness to system uncertainties and external disturbances, sliding mode control (SMC) reduces the need for an accurate model and is applicable to control system design under both normal and faulty conditions [41,42,43]. Hence, SMC methods have often been applied to systems with high nonlinearities, varying parameters, and external disturbances, such as aircraft systems [44], direct current (DC) servomotors [45], multi-machine power systems [46], and MRRs [12].
Although previous ADP-based FTC methods can guarantee the stability of faulty systems, a faster control action is often required in practice. Thus, motivated by [47], this paper develops a SM-based online fault compensation control (SMOFCC) scheme for MRRs with unknown actuator failures. For the fault-free case, the SM-based approximate optimal control (SMAOC) is derived using a SM-based iterative controller and an adaptive robust term. When actuator failures occur, the SMOFCC is obtained by adding an online fault compensator to the SMAOC. The main contributions and novelties of this work are as follows.
(1) The scheme extends the ADP-based SMC method to the FTC problem for MRRs with unknown actuator failures, and the online fault compensation is achieved without FDI.

(2) The proposed SMOFCC scheme, which is composed of a SM-based iterative controller, an adaptive robust term, and a fault compensator, guarantees that the MRR system is asymptotically stable, rather than merely uniformly ultimately bounded (UUB) [3, 32, 39, 40].

(3) By employing the SMC technique, the developed SMOFCC has a faster control response than control based on tracking error feedback only [3].
The rest of this paper is organized as follows. In the next section, we present the problem statement. In the subsequent section, the SM-based control scheme for MRRs in the fault-free case is presented. Then, an online fault compensator is developed to obtain the FTC law, and the stability analysis is provided. A numerical simulation demonstrates the effectiveness of the SMOFCC. In the last section, a brief conclusion is drawn.
Problem statement
The n-DOF (degree of freedom) MRR system with unknown actuator failures can be described by
\[ M(q)\ddot{q} + C(q,\dot{q})\dot{q} + G(q) = u + f_{a}, \quad (1) \]
where \(q \in {{\mathbb {R}}^n}\) denotes the vector of joint displacements, \(M(q) \in {{\mathbb {R}}^{n \times n}}\) denotes the nonsingular symmetric inertia matrix, \(C(q,{\dot{q}}){\dot{q}} \in {{\mathbb {R}}^n}\) denotes the Coriolis and centripetal force, \(G(q) \in {{\mathbb {R}}^n}\) denotes the gravity term, \(u \in {{\mathbb {R}}^n}\) denotes the joint input torque, and \(f_{a} \in {{\mathbb {R}}^n}\) represents the unknown additive actuator failure.
Defining the system state as \(x = {[{x_1},{x_2}]^{\mathsf {T}}} = {[q,{\dot{q}}]^{\mathsf {T}}}\), the MRR system (1) can be rewritten as
\[ \dot{x}_1 = x_2, \quad \dot{x}_2 = f(x) + g(x)(u + f_a), \quad y = x_1, \quad (2) \]
where \(x \in {{\mathbb {R}}^{2n}}\) and \(y \in {{\mathbb {R}}^{n}}\) are the state and output vectors, respectively, \(f(x)={{M}^{-1}}(q)[-C(q,{\dot{q}}){\dot{q}}-G(q)]\), and \(g(x)={{M}^{-1}}(q)\).
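To make the state-space form (2) concrete, the sketch below casts a manipulator model into \(\dot{x}_2 = f(x) + g(x)(u + f_a)\). The inertia, Coriolis, and gravity terms are hypothetical 2-DOF placeholders, not the paper's actual MRR model:

```python
import numpy as np

# Hypothetical 2-DOF manipulator terms, used only to illustrate form (2).

def M(q):
    # hypothetical nonsingular symmetric inertia matrix
    return np.array([[2.0 + np.cos(q[1]), 0.5],
                     [0.5, 1.0]])

def C(q, dq):
    # hypothetical Coriolis/centripetal matrix
    return np.array([[-0.1 * np.sin(q[1]) * dq[1], -0.1 * np.sin(q[1]) * (dq[0] + dq[1])],
                     [0.1 * np.sin(q[1]) * dq[0], 0.0]])

def G(q):
    # hypothetical gravity vector (vanishes at q = 0)
    return np.array([9.8 * np.sin(q[0]), 4.9 * np.sin(q[1])])

def f(x):
    # drift term f(x) = M^{-1}(q) [ -C(q, dq) dq - G(q) ]
    q, dq = x[:2], x[2:]
    return np.linalg.solve(M(q), -C(q, dq) @ dq - G(q))

def g(x):
    # input matrix g(x) = M^{-1}(q)
    return np.linalg.inv(M(x[:2]))

def x_dot(x, u, f_a=np.zeros(2)):
    # full state derivative of system (2): [x2, f(x) + g(x)(u + f_a)]
    return np.concatenate([x[2:], f(x) + g(x) @ (u + f_a)])
```

At the origin with zero input the drift vanishes (f(0) = 0), consistent with Assumption 1 below.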
Assumption 1
The nonlinear functions \(f(\cdot )\) and \(g(\cdot )\) are Lipschitz continuous with \(f(0) = 0\), i.e., \(x=0\) is the equilibrium point of system (2), and system (2) is controllable.
Assumption 2
The desired reference trajectory \({q_d}\), the velocity vector \({\dot{q}_d}\), and the acceleration vector \({\ddot{q}_d}\) are norm-bounded as [15]
\[ \left\| q_d \right\| \le q_{\kappa }, \quad \left\| \dot{q}_d \right\| \le q_{\kappa }, \quad \left\| \ddot{q}_d \right\| \le q_{\kappa }, \]
where \(q_{\kappa } >0 \) is a known constant.
Assumption 3
The actuator failure \(f_a\) is norm-bounded as \(\left\| f_a \right\| \le \varsigma _{M}\), where \(\varsigma _{M}\) is a positive constant.
For the fault-free case of the MRR system (2), i.e., \(f_a = 0\), we define the nominal system as
\[ \dot{x}_1 = x_2, \quad \dot{x}_2 = f(x) + g(x){u_0}, \quad y = x_1, \quad (3) \]
where \({u_0}\) is the SMAOC law.
The tracking error is defined as
\[ e = x_1 - q_{\vartheta }, \quad (4) \]
where \({x_\vartheta } = {[{q_\vartheta },{{\dot{q}}_\vartheta }]^{{\mathsf {T}}}}\) is the desired reference trajectory. Thus, the time derivative of the tracking error (4) becomes
\[ \dot{e} = x_2 - \dot{q}_{\vartheta }. \quad (5) \]
To accelerate the convergence rate, we introduce the SM surface as
\[ s = \dot{e} + \varLambda e, \quad (6) \]
where \(\varLambda \) is a positive definite matrix.
The time derivative of (6) is
\[ \dot{s} = \ddot{e} + \varLambda \dot{e} = f(x) + g(x)u + \varphi , \quad (7) \]
where \(\varphi =-{{\ddot{x}}_{\vartheta }}+\varLambda {\dot{e}}\).
To realize the approximate optimal control, the SM-based iterative controller \({u_s}\) is used to make the trajectory tracking error converge to the steady state. The cost function is defined as
\[ J(s(t)) = \int _{t}^{\infty }{Z(s(\tau ),{u_s}(\tau ))\,\mathrm {d}\tau }, \quad (8) \]
where \(Z(s,{u_s})={{s}^{{\mathsf {T}}}}Qs+u_{s}^{{\mathsf {T}}}R{{u}_{s}}\), \(J( s(t)) \ge 0\) for arbitrary s and \(u_s\), and \(J(0) = 0\). \(Q\in {{{\mathbb {R}}}^{n\times n}}\) and \(R\in {{{\mathbb {R}}}^{m\times m}}\) are positive definite matrices.
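As a concrete numerical illustration of the surface \(s = \dot{e} + \varLambda e\) in (6) and the utility \(Z(s,u_s)\) in (8), the sketch below evaluates both; the values of \(\varLambda \), Q, and R are illustrative assumptions, not the paper's tuning:

```python
import numpy as np

# Illustrative positive definite choices for Lambda, Q, R (assumptions).
Lam = np.diag([2.0, 2.0])
Q = np.eye(2)
R = 0.1 * np.eye(2)

def sliding_surface(e, de):
    # s = e_dot + Lambda * e, per (6)
    return de + Lam @ e

def utility(s, u):
    # Z(s, u) = s^T Q s + u^T R u, per (8)
    return s @ Q @ s + u @ R @ u
```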
Remark 1
From (6), we can observe that the SM surface consists of both the position tracking error e and the velocity tracking error \({\dot{e}}\), rather than the position tracking error e only. Thus, compared with optimal control based on position tracking error feedback only, optimal control with the SM signal achieves faster convergence and smaller overshoot. Furthermore, SMC has low sensitivity and strong robustness to system uncertainties, and is easily implemented in practice.
Online fault compensation control design and stability analysis
Sliding mode-based HJBE
Definition 1
Considering the nominal MRR system (3), a SM-based iterative control strategy \(\mu (s)\in \varPsi (\varOmega )\) is said to be admissible with respect to (8) on a compact set \(\varOmega \) if \(\mu (s)\) is continuous on \(\varOmega \) with \(\mu (0)=0\), \(\mu (s)\) ensures that the states of system (3) converge on \(\varOmega \), and J(s) is finite for all \(s\in \varOmega \) [3, 27, 32].
For each admissible control strategy \(\mu (s)\in \varPsi (\varOmega )\) of system (3), where \(\varPsi (\varOmega )\) is the set of admissible controls, if the cost function (8) is continuously differentiable, then the nonlinear Lyapunov equation can be derived as
\[ 0 = Z(s,\mu (s)) + {(\nabla J(s))^{\mathsf {T}}}\dot{s}, \quad (9) \]
where \(\nabla J(s)=\frac{\partial J(s)}{\partial s}\).
The Hamiltonian is defined as
\[ H(s,{u_s},\nabla J(s)) = Z(s,{u_s}) + {(\nabla J(s))^{\mathsf {T}}}\dot{s}, \quad (10) \]
and the optimal cost function can be defined as
\[ {{J}^{*}}(s) = \min _{{u_s}\in \varPsi (\varOmega )} \int _{t}^{\infty }{Z(s(\tau ),{u_s}(\tau ))\,\mathrm {d}\tau }. \quad (11) \]
Based on the Bellman principle of optimality, \({{J}^{*}}(s)\) satisfies the HJBE
\[ \min _{{u_s}} H(s,{u_s},\nabla {{J}^{*}}(s)) = 0. \quad (12) \]
Since \(\frac{\partial H( s,u_{s}^{*},\nabla {{J}^{*}}(s) )}{\partial u_{s}^{*}}=0\), the optimal control law can be obtained as
\[ u_{s}^{*} = -\frac{1}{2}{{R}^{-1}}{{g}^{{\mathsf {T}}}}(x)\nabla {{J}^{*}}(s). \quad (13) \]
Substituting (13) into (12) and performing an equivalent transformation, the HJBE becomes
\[ 0 = {{s}^{{\mathsf {T}}}}Qs + {(\nabla {{J}^{*}}(s))^{\mathsf {T}}}\left( f(x)+\varphi \right) - \frac{1}{4}{(\nabla {{J}^{*}}(s))^{\mathsf {T}}}g(x){{R}^{-1}}{{g}^{{\mathsf {T}}}}(x)\nabla {{J}^{*}}(s). \quad (14) \]
Online PI algorithm
According to [36, 37, 39, 40], the solution of the HJBE (12) can be approximated through the online PI algorithm when the system is fault-free. Unlike [3], the online PI algorithm is realized with the SM feedback signal rather than the system tracking error. The online PI algorithm is presented in Algorithm 1.
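Algorithm 1 is not reproduced here in full, but its policy evaluation / policy improvement alternation can be illustrated in the linear-quadratic special case, where each evaluation step reduces to a Lyapunov equation and the improvement step mirrors the stationarity condition behind (13) (giving \(K = R^{-1}B^{\mathsf {T}}P\)). All matrices below are illustrative assumptions, not the paper's MRR dynamics:

```python
import numpy as np

def solve_lyapunov(Acl, W):
    """Solve Acl^T P + P Acl = -W for symmetric P by vectorization."""
    n = Acl.shape[0]
    I = np.eye(n)
    L = np.kron(I, Acl.T) + np.kron(Acl.T, I)
    P = np.linalg.solve(L, -W.flatten('F')).reshape(n, n, order='F')
    return 0.5 * (P + P.T)  # symmetrize against round-off

def policy_iteration(A, B, Q, R, K0, iters=25):
    K = K0  # initial admissible (stabilizing) gain
    for _ in range(iters):
        Acl = A - B @ K
        P = solve_lyapunov(Acl, Q + K.T @ R @ K)  # policy evaluation
        K = np.linalg.solve(R, B.T @ P)           # policy improvement
    return P, K

# Open-loop-stable toy system, so K0 = 0 is admissible.
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = np.array([[0.0], [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])
P, K = policy_iteration(A, B, Q, R, np.zeros((1, 2)))
```

On convergence, P satisfies the algebraic Riccati equation (the LQR analogue of the HJBE) and A − BK is Hurwitz.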
Sliding mode-based critic neural network
The cost function J(s) can be reconstructed by a single-layer NN as
\[ J(s) = W_{c}^{{\mathsf {T}}}{{\sigma }_{c}}(s) + {{\varepsilon }_{c}}(s), \quad (15) \]
where \({{W}_{c}}\in {{{\mathbb {R}}}^{M}}\) and \({{\sigma }_{c}}(s)\) denote the ideal weight vector and the activation function, respectively, M denotes the number of neurons in the hidden layer, and \({{\varepsilon }_{c}}(s)\) denotes the approximation error of the critic NN (CNN). Then, from (15), we can obtain
\[ \nabla J(s) = \nabla \sigma _{c}^{{\mathsf {T}}}(s){{W}_{c}} + \nabla {{\varepsilon }_{c}}(s). \quad (16) \]
According to (16), the Hamiltonian (10) can be rewritten as
\[ H(s,{u_s},{{W}_{c}}) = Z(s,{u_s}) + W_{c}^{{\mathsf {T}}}\nabla {{\sigma }_{c}}(s)\dot{s} = {{s}_{cH}}, \quad (17) \]
where \({{s}_{cH}}\) is the residual error caused by the NN approximation.
To estimate \({{W}_{c}}\), the CNN (15) is approximated as
\[ {\hat{J}}(s) = {\hat{W}}_{c}^{{\mathsf {T}}}{{\sigma }_{c}}(s), \quad (18) \]
and from (18) we can obtain
\[ \nabla {\hat{J}}(s) = \nabla \sigma _{c}^{{\mathsf {T}}}(s){{{\hat{W}}}_{c}}. \quad (19) \]
Inserting (19) into (17), we have the approximate Hamiltonian
\[ {\hat{H}}(s,{u_s},{{{\hat{W}}}_{c}}) = Z(s,{u_s}) + {\hat{W}}_{c}^{{\mathsf {T}}}\nabla {{\sigma }_{c}}(s)\dot{s} = {{s}_{c}}. \quad (20) \]
Denote \(\varpi =\nabla {{\sigma }_{c}}(s){\dot{s}}\), and assume that there exists a constant \({{\varpi }_{M}}>0\) such that \(\left\| \varpi \right\| \le {{\varpi }_{M}}\). By minimizing the objective function \({{E}_{c}}=\frac{1}{2}s_{c}^{{\mathsf {T}} }{{s}_{c}}\) with the gradient descent algorithm, \({{{\hat{W}}}_{c}}\) is updated by
\[ {{\dot{\hat{W}}}_{c}} = -{{\beta }_{c}}\frac{\partial {{E}_{c}}}{\partial {{{\hat{W}}}_{c}}} = -{{\beta }_{c}}\varpi {{s}_{c}}, \quad (21) \]
where \({{\beta }_{c}}>0\) is the learning rate.
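A discrete-time sketch of this gradient-descent critic update follows, with the residual \(s_c = Z(s,u_s) + \hat{W}_c^{\mathsf {T}}\varpi \); any normalization used in the paper's update law (21) is omitted here, and all numerical values are illustrative:

```python
import numpy as np

# Plain gradient-descent step on E_c = (1/2) s_c^2 with respect to W_hat.
# varpi stands for grad_sigma_c(s) @ s_dot; Z_val is the utility Z(s, u_s).

def critic_step(W_hat, varpi, Z_val, beta_c, dt):
    s_c = Z_val + W_hat @ varpi     # approximate Hamiltonian residual
    dW = -beta_c * varpi * s_c      # gradient-descent direction
    return W_hat + dt * dW          # Euler step on W_hat_dot
```

For a fixed regressor the residual contracts geometrically when the step size satisfies \(\beta _c\,dt\,\Vert \varpi \Vert ^2 < 2\).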
Define the weight approximation error as
\[ {{\tilde{W}}_{c}} = {{W}_{c}} - {{{\hat{W}}}_{c}}. \quad (22) \]
From (17), (20) and (22), one has
Then, the weight approximation error is updated by
Inserting (16) into (13), the ideal SM-based iterative control strategy is expressed as
\[ {{u}_{s}} = -\frac{1}{2}{{R}^{-1}}{{g}^{{\mathsf {T}}}}(x)\left[ \nabla \sigma _{c}^{{\mathsf {T}}}(s){{W}_{c}} + \nabla {{\varepsilon }_{c}}(s) \right] , \quad (25) \]
and it is approximated as
\[ {{{\hat{u}}}_{s}} = -\frac{1}{2}{{R}^{-1}}{{g}^{{\mathsf {T}}}}(x)\nabla \sigma _{c}^{{\mathsf {T}}}(s){{{\hat{W}}}_{c}}. \quad (26) \]
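Assembling the pieces, an approximate control of the form \(\hat{u}_s = -\frac{1}{2}R^{-1}g^{\mathsf {T}}(x)\nabla \sigma _c^{\mathsf {T}}(s)\hat{W}_c\) (the actor recovered from the critic weights) can be sketched as follows; g(x), R, the basis, and the weights are illustrative stand-ins:

```python
import numpy as np

def grad_sigma_quad(s):
    # Jacobian of the quadratic basis [s1^2, s1*s2, s2^2]
    s1, s2 = s
    return np.array([[2.0 * s1, 0.0],
                     [s2, s1],
                     [0.0, 2.0 * s2]])

def u_s_hat(s, gx, W_hat, R, grad_sigma=grad_sigma_quad):
    # u_s_hat = -(1/2) R^{-1} g(x)^T grad_sigma(s)^T W_hat
    grad_J = grad_sigma(s).T @ W_hat              # approx. gradient of J(s)
    return -0.5 * np.linalg.solve(R, gx.T @ grad_J)
```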
Theorem 1
Considering the nominal MRR system (3), if the weight vector of the SM-based CNN is tuned by (21), the weight approximation error is guaranteed to be UUB.
Proof
Choose a Lyapunov function candidate as
\[ {{L}_{1}} = \frac{1}{2{{\beta }_{c}}}{\tilde{W}}_{c}^{{\mathsf {T}}}{{{\tilde{W}}}_{c}}. \quad (27) \]
Taking the time derivative of (27), we have
Therefore, we can obtain \({{{\dot{L}}}_{1}}\le 0\) as long as \({{\tilde{W}}_{c}}\) lies outside the compact set \({{\varOmega }_{c}}= \{{{\tilde{W}}_{c}}: \Vert {{{\tilde{W}}}_{c}} \Vert <\frac{{{s}_{cH}}}{{{\varpi }_{M}}}\}\). Thus, the CNN weight estimation error \({{\tilde{W}}_{c}}\) is UUB. This completes the proof.
Sliding mode-based approximate optimal control
In light of previous analysis, we can design the SMAOC law as
where \({\mathrm{sgn}} (s)= [{\mathrm{sgn}}({{s}_{1}}),{\mathrm{sgn}} ({{s}_{2}}),\ldots ,{\mathrm{sgn}} ({{s}_{n}})] \in {{\mathbb {R}}^n}\), \({{k}_{1}}\) and \({{k}_{2}}\) are positive definite constant matrices, and \({\hat{\varphi }}\) is a robust term which is tuned by
where \({{\beta }_{\varphi }}\) is a positive definite matrix. According to SMC theory and the proposed controller (29), the reachability condition \({{s}^{{\mathsf {T}}}} {\dot{s}}\le 0\) ensures that the MRR system states reach and stay on the SM surface.
Theorem 2
Consider the nominal MRR system (3), the SM surface (6), and its time derivative (7). The tracking error of the MRR system can reach the SM surface and stay on it thereafter under the developed SMAOC law (29).
Proof
Choose a Lyapunov function candidate as
where \({\tilde{\varphi }}=\varphi -{\hat{\varphi }}\). Introducing the SMAOC law (29) into (31), we have
From Assumption 1, there exist two unknown positive constants \({{D}_{f}}\) and \({{D}_{g}}\) such that \(\left\| f(x) \right\| \le {{D}_{f}}\) and \(\left\| g(x) \right\| \le {{D}_{g}}\). Using Young's inequality, (32) becomes
where \(\delta ={{\lambda }_{\min }}({{k}_{2}})-{{D}_{f}} -{{D}_{g}}\left\| {{u}_{s}} \right\| \). This implies that the system tracking errors reach the SM surface and remain on it when \(\delta \ge 0\). From \({{{\dot{L}}}_{2}}\le 0\), we can see that s and \({\dot{s}}\) are bounded. Moreover, \({{{\dot{L}}}_{2}}\le -\delta \left\| s \right\| \) gives \(\int _{0}^{t}{\left\| s \right\| }\,d\tau \le (1/\delta )[{{L}_{2}}(0)-{{L}_{2}}(t)]\). Since \({{L}_{2}}(0)\) is bounded and \({{L}_{2}}(t)\) is monotonically decreasing with a lower bound, \({{\lim }_{t\rightarrow \infty }}\int _{0}^{t}{\left\| s \right\| }\,d\tau \) is also bounded. Then, by Barbalat's lemma, \({{\lim }_{t\rightarrow \infty }}s(t)=0\), i.e., s(t) converges asymptotically. Furthermore, e(t) converges to zero asymptotically. Therefore, the system states reach the SM surface. This completes the proof.
Sliding mode-based online fault compensator
Based on the analysis of the fault-free case, an online fault compensator is developed to keep the closed-loop system stable when actuator failures occur, i.e., \({f_a}\ne 0\). By introducing the SMAOC law (letting \(u={{u}_{0}}\)), the faulty MRR system (2) becomes
\[ \dot{x}_1 = x_2, \quad \dot{x}_2 = f(x) + g(x)({u_0} + {f_a}). \quad (34) \]
According to (8), \({{J}^{*}}(s)\ge 0\) with \({{J}^{*}}(0) = 0\); thus, \({{J}^{*}}(s)\) is a positive definite function. Then, we can obtain
\[ {{{\dot{J}}}^{*}}(s) = {(\nabla {{J}^{*}}(s))^{\mathsf {T}}}\dot{s}. \quad (35) \]
Combining (7) with (34), one can obtain
Considering (9), (14), and assuming that there exists a positive constant \({{\varphi }_{M}}\) such that \(\left\| {{\tilde{\varphi }}} \right\| \le {{\varphi }_{M}}\), we have
where \(\zeta ={{\lambda }_{\min }}({{k}_{2}})-{{\varphi }_{M}}\). We can conclude that whether \({{{\dot{J}}}^{*}}(s)\) is negative or not depends on \({{f}_{a}}\). Therefore, an online fault compensator is expected to be designed to guarantee the stability of the closed-loop MRR system with actuator failures.
Thus, the SMOFCC law for the MRR system (2) is designed as
\[ u = {{u}_{0}} - {{{\hat{f}}}_{a}}, \quad (38) \]
where \({{{\hat{f}}}_{a}}\) is the estimate of the unknown actuator failure, which is adaptively updated by
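The mechanism of such online compensation can be illustrated on a scalar sliding variable: the sliding dynamics are perturbed by the estimation error, and an adaptive integral of s drives the estimate toward the true fault. The adaptive-integral form below is an assumed stand-in for the paper's update law (39), and all gains and the constant fault amplitude are illustrative:

```python
# Scalar illustration of online fault compensation:
#   s_dot       = -k * s + (f_a - f_a_hat)   (sliding dynamics with fault mismatch)
#   f_a_hat_dot =  beta * s                  (assumed adaptive update, not (39))

def simulate(f_a=1.0, k=5.0, beta=50.0, dt=1e-3, T=5.0):
    s, f_hat = 0.0, 0.0
    for _ in range(int(T / dt)):
        ds = -k * s + (f_a - f_hat)
        df = beta * s
        s += dt * ds        # forward-Euler integration
        f_hat += dt * df
    return s, f_hat
```

The pair (s, f_a − f̂_a) obeys a damped second-order dynamic, so both the sliding variable and the estimation error decay to zero for positive k and beta.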
According to the aforementioned design procedure, the block diagram of the designed SMOFCC is shown in Fig. 1.
Remark 2
We note that the SMOFCC scheme (38) is developed based on the online fault compensation technique [35] rather than the state observer technique [32]. It is worth noting that the fault compensator (39) can not only estimate the failure but also compensate for the NN approximation error, whereas a state-based fault observer can only estimate the failure.
Stability analysis
Theorem 3
Consider the faulty MRR system (2), the CNN (15) with the updating law (21), the cost function (8), and the online compensator \({{{\hat{f}}}_{a}}\) (39). The closed-loop faulty MRR system is guaranteed to be asymptotically stable under the developed SMOFCC policy (38).
Proof
Choose a Lyapunov function candidate as
Taking the time derivative of \({{L}_{3}}\) along the solution of (7) and considering (14), we have
where \(\upsilon = 2{{\lambda }_{\min }}({{k}_{2}})-{{D}_{f}}\). Substituting (39) into (41), one can obtain
where \({{\varGamma }_{1}}={{\lambda }_{\min }}(Q)+2{{\lambda }_{\min }}({{k}_{1}})-\frac{1}{2}\) and \({{\varGamma }_{2}} = {{\lambda }_{\min }}(R)-\frac{1}{2}D_{g}^{2}\). Thus, we can obtain that \({{{\dot{L}}}_{3}}\le 0\) when \({{\lambda }_{\min }}(Q)+2{{\lambda }_{\min }}({{k}_{1}})\ge \frac{1}{2}\), \({{\lambda }_{\min }}(R)\ge \frac{1}{2}D_{g}^{2}\), and \(\upsilon \ge 0\). Hence, asymptotic stability of the MRR tracking error is ensured with the developed SMOFCC policy. This completes the proof.
Remark 3
The difference between model-free and model-based control lies in whether the dynamic model of the controlled plant is known. In this paper, the SMOFCC scheme is designed based on known system dynamics, but it can be extended to the model-free case when the system dynamics are unavailable. To achieve this goal, one strategy is to employ an observer [15] or identifier [48] to estimate the system dynamics, and then use the estimate in the control design. Alternatively, a pure model-free control method can be developed, i.e., the controller is designed directly from system input-output data [28].
Simulation study and results analysis
In this section, a 2-DOF MRR (see configuration b in [3]) is employed to comparatively verify the effectiveness of the theoretical results of the SMOFCC.
The desired trajectories of two joint modules are defined as
and an unknown additive actuator failure is assumed to be
To approximate (8), we employ a CNN (18) with \(M=3\), the weight vector \({{{\hat{W}}}_{c}}={{[{{{\hat{W}}}_{c1}},{{{\hat{W}}}_{c2}},{{{\hat{W}}}_{c3}}]}^{{\mathsf {T}}}}\), and the activation function \({{\sigma }_{c}}={{[s_{1}^{2},{{s}_{1}}{{s}_{2}},s_{2}^{2}]}^{{\mathsf {T}}}}\). The initial weight vectors and control parameters are listed in Table 1.
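The simulation's critic basis and its Jacobian, which enters the weight update through \(\varpi = \nabla \sigma _c(s)\dot{s}\), can be written down directly:

```python
import numpy as np

# Quadratic critic basis sigma_c(s) = [s1^2, s1*s2, s2^2]^T and its Jacobian.

def sigma_c(s):
    s1, s2 = s
    return np.array([s1**2, s1 * s2, s2**2])

def grad_sigma_c(s):
    # rows are gradients of each basis element w.r.t. [s1, s2]
    s1, s2 = s
    return np.array([[2.0 * s1, 0.0],
                     [s2, s1],
                     [0.0, 2.0 * s2]])

def varpi(s, s_dot):
    # regressor varpi = grad_sigma_c(s) @ s_dot
    return grad_sigma_c(s) @ s_dot
```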
To show the superiority of the proposed scheme, we compare the control performance of the developed SMOFCC scheme with the existing optimal control scheme based on tracking error feedback only in [3]. Figures 2, 3, 4 and 5 illustrate the simulation results under the proposed control strategy, and Figs. 6, 7, 8 and 9 depict the simulation results under the control method in [3].
The designed compensator (39) is employed to estimate the fault amplitude online. From Figs. 2 and 6, we can observe that the estimated failure tracks the actual failure within 1 s under the developed SMOFCC scheme, while it takes nearly 10 s with the control scheme in [3]. Moreover, the SMOFCC obtains a smaller overshoot; that is, the fault amplitude estimated by the proposed compensator (39) has a smaller bias and a higher accuracy. Figure 3 shows that the actual trajectories track the desired ones within 3 s under the proposed scheme, i.e., a faster convergence rate is obtained compared with that in [3]. Meanwhile, compared with Fig. 7, Fig. 3 shows that the SMOFCC provides faster convergence at the beginning of the run in the fault-free scenario. From Fig. 4, the tracking errors gradually decrease and reach steady state after 4 s, which confirms the above results more intuitively. Figures 5 and 9 illustrate the control inputs of the two control methods, respectively. The control input of joint 1 under the SMOFCC changes only slightly after the actuator failure occurs at \(t=30\) s, owing to the online fault compensation. From the above simulation results, the SMOFCC achieves a faster convergence rate, a smaller overshoot, and a higher accuracy. This is because the SMOFCC scheme introduces the SMC technique, which combines the position tracking error e and the velocity tracking error \({\dot{e}}\), similar to a proportional-derivative controller [49]: the proportional action improves the convergence rate and reduces the system error, while the derivative action reduces overshoot and settling time. Besides, the FDI unit is removed thanks to the online fault estimation, so the fault diagnosis time is greatly reduced and good fault tolerance performance is obtained despite the occurrence of the actuator failure.
Furthermore, the closed-loop MRR system is asymptotically stable, rather than UUB. The system states can be recovered by the online fault compensation after the actuator failure occurs. Thus, the tracking performance under the designed SMOFCC is superior to that in [3]. In summary, the proposed control scheme performs a better tracking and fault tolerant performance for MRRs by introducing SM surface.
Conclusion
In this paper, we propose the SMOFCC scheme, which extends ADP-based control with the SMC technique to solve the FTC problem of MRRs with unknown actuator failures. The SMOFCC consists of a SM-based iterative controller, an adaptive robust term, and an online fault compensator. Owing to the SMC technique, the reliance on a prior nominal controller that requires an accurate dynamic model is relaxed. Moreover, the closed-loop MRR system is guaranteed to be asymptotically stable, rather than UUB. Based on the online estimation of actuator failures, the proposed SMOFCC scheme removes the FDI unit. Comparative simulation results show that the developed scheme provides faster convergence and smaller overshoot than existing optimal control methods developed based on tracking error feedback only. In future work, approximate optimal FTC problems for MRRs with other fault scenarios and noises will be further considered.
References
Gams A, Nemec B, Ijspeert AJ, Ude A (2014) Coupling movement primitives: interaction with the environment and bimanual tasks. IEEE Trans Robot 30(4):816–830
Christoph HB, Jamie P (2018) Mori: a modular origami robot. IEEE/ASME Trans Mechatron 22(5):2153–2164
Li Y, Xia H, Zhao B (2018) Policy iteration algorithm based fault tolerant tracking control: an implementation on reconfigurable manipulators. J Electr Eng Technol 13(4):1740–1751
Meister E, Gutenkunst A, Levi P (2013) Dynamics and control of modular and self-reconfigurable robotic systems. Int J Adv Intell Syst 6(1&2):66–78
Kirchoff S, Melek WW (2008) Distributed control of modular and reconfigurable robot with torque sensing. Robotica 26(1):75–84
Li Z, Melek WW, Clark C (2013) Distributed fault detection for modular and reconfigurable robots with joint torque sensing: a prediction error based approach. Mechatronics 23(6):607–616
Kirchoff S, Melek WW (2007) A saturation-type robust controller for modular manipulators arms. Mechatronics 17(4):175–190
Li Z, Melek WW, Clark C (2009) Decentralized robust control of robot manipulators with harmonic drive transmission and application to modular and reconfigurable serial arms. Robotica 27(2):291–302
Zhu M, Li Y (2010) Decentralized adaptive fuzzy sliding mode control for reconfigurable modular manipulators. Int J Robust Nonlinear Control 20(4):472–488
Li Y, Ding G, Zhao B (2016) Decentralized adaptive neural network sliding mode position/force control of constrained reconfigurable manipulators. J Cent South Univ 23:2917–2925
Zhou F, Dong B, Li Y (2017) Torque sensorless force/position decentralized control for constrained reconfigurable manipulator with harmonic drive transmission. Int J Control Autom Syst 15(5):2364–2375
Zhao B, Li C, Ma T, Li Y (2015) Multiple faults detection and isolation via decentralized sliding mode observer for reconfigurable manipulator. J Electr Eng Technol 10(6):2393–2405
Zhao B, Li Y, Liu D (2017) Self-tuned local feedback gain based decentralized fault tolerant control for a class of large-scale nonlinear systems. Neurocomputing 235:147–156
Werbos PJ (1992) Approximate dynamic programming for real-time control and neural modeling. In: Handbook of intelligent control: neural, fuzzy, and adaptive approaches, chapter 13
Zhao B, Liu D (2020) Event-triggered decentralized tracking control of modular reconfigurable robots through adaptive dynamic programming. IEEE Trans Ind Electron 67(4):3054–3064
Zhao B, Luo F, Lin H, Liu D (2021) Particle swarm optimized neural networks based local tracking control scheme of unknown nonlinear interconnected systems. Neural Netw 134:54–63
Fu Y, Fu J, Chai T (2015) Robust adaptive dynamic programming of two-player zero-sum games for continuous-time linear systems. IEEE Trans Neural Netw Learn Syst 26(12):3314–3319
Zhang Q, Zhao D, Wang D (2018) Event-based robust control for uncertain nonlinear systems using adaptive dynamic programming. IEEE Trans Neural Netw Learn Syst 29(1):37–50
Yang H, Ying Li, Yuan H, Liu Z (2018) Adaptive dynamic programming for security of networked control systems with actuator saturation. Inf Sci 460–461:51–64
Yang Y, Vamvoudakis KG, Modares H, Yin Y, Wunsch DC (2020) Hamiltonian-driven hybrid adaptive dynamic programming. IEEE Trans Syst Man Cybern: Syst. https://doi.org/10.1109/TSMC.2019.2962103
Yang Y, Vamvoudakis KG, Modares H, Yin Y, Wunsch DC (2020) Safe intermittent reinforcement learning with static and dynamic event generators. IEEE Trans Neural Netw Learn Syst 31(12):5441–5455
Wen G, Philip Chen CL, Ge SS, Yang H, Liu X (2019) Optimized adaptive nonlinear tracking control using actor-critic reinforcement learning strategy. IEEE Trans Ind Inform 15(9):4969–4977
Zhao B, Liu D, Luo C (2020) Reinforcement learning-based optimal stabilization for unknown nonlinear systems subject to inputs with uncertain constraints. IEEE Trans Neural Netw Learn Syst 31(10):4330–4340
Bai W, Li T, Tong S (2020) NN reinforcement learning adaptive control for a class of nonstrict-feedback discrete-time systems. IEEE Trans Cybern 50(11):4573–4584
Bai W, Zhou Q, Li T, Li H (2020) Adaptive reinforcement learning neural network control for uncertain nonlinear system with input saturation. IEEE Trans Cybern 50(8):3433–3443
Shi J, Yue D, Xie X (2020) Adaptive optimal tracking control for nonlinear continuous-time systems with time delay using value iteration algorithm. Neurocomputing 396:172–178
Zhang K, Zhang H, Jiang H, Wang Y (2018) Near-optimal output tracking controller design for nonlinear systems using an event-driven ADP approach. Neurocomputing 309(2):168–178
Yang Y, Guo Z, Xiong H, Ding D, Yin Y, Wunsch DC (2019) Data-driven robust control of discrete-time uncertain linear systems via off-policy reinforcement learning. IEEE Trans Neural Netw Learn Syst 30(12):3735–3747
Wu S, Grimble M, Wei W (2000) QFT based robust/fault tolerant flight control design for a remote pilotless vehicle. IEEE Trans Control Syst Technol 8(6):1010–1016
Bonivento C, Isidori A, Marconi L, Paoli A (2004) Implicit fault-tolerant control: application to induction motors. Automatica 40(3):355–371
Benosman M, Lum KY (2010) Passive actuators’ fault-tolerant control for affine nonlinear systems. IEEE Trans Control Syst Technol 18(1):152–163
Zhao B, Liu D, Li Y (2017) Observer based adaptive dynamic programming for fault tolerant control of a class of nonlinear systems. Inf Sci 384:21–33
Chen W, Saif M (2006) An iterative learning observer for fault detection and accommodation in nonlinear time-delay systems. Int J Robust Nonlinear Control 16(1):1–19
Zakharov A, Zattoni E, Yu M, Jounela SLJ (2015) A performance optimization algorithm for controller reconfiguration in fault tolerant distributed model predictive control. J Process Control 34:56–69
Seron MM, Jose ADD (2014) Robust actuator fault compensation accounting for uncertainty in the fault estimation. Int J Adapt Control Signal Process 28(12):1440–1453
Mohamed AK, Yu X, Zhang Y (2018) Fault-tolerant cooperative control design of multiple wheeled mobile robots. IEEE Trans Control Syst Technol 26(2):756–764
Avram RC, Zhang X, Muse J (2018) Nonlinear adaptive fault-tolerant quadrotor altitude and attitude tracking with multiple actuator faults. IEEE Trans Control Syst Technol 26(2):701–707
Hmidi R, Brahim AB, Dhahri S, Hmida FB, Sellami A (2020) Sliding mode fault-tolerant control for Takagi–Sugeno fuzzy systems with local nonlinear models: application to inverted pendulum and cart system. Trans Inst Meas Control. https://doi.org/10.1177/0142331220949366
Fan Q, Yang G (2016) Adaptive fault-tolerant control for affine non-linear systems based on approximate dynamic programming. IET Control Theory Appl 10(6):655–663
Zhao B, Jia L, Xia H, Li Y (2018) Adaptive dynamic programming based stabilization of nonlinear systems with unknown actuator saturation. Nonlinear Dyn 93(4):2089–2103
Utkin VI (1992) Sliding modes in control and optimization. Springer, Berlin
Argha A, Li L, Su SW, Hung TN (2016) On LMI-based sliding mode control for uncertain discrete-time systems. J Frankl Inst 353(15):3857–3875
Qin J, Ma Q, Gao F, Zheng WX (2018) Fault-tolerant cooperative tracking control via integral sliding mode control technique. IEEE/ASME Trans Mechatron 23(1):342–351
Pazooki M, Mazinan AH (2017) Hybrid fuzzy-based sliding-mode control approach, optimized by genetic algorithm for quadrotor unmanned aerial vehicles. Complex Intell Syst 4(2):79–93
Kommuri SK, Rath JJ, Veluvolu KC (2018) Sliding-mode based observer-controller structure for fault-resilient control in dc servomotors. IEEE Trans Ind Electron 65(51):918–929
Sharifi E, Mazinan AH (2018) On transient stability of multi-machine power systems through Takagi–Sugeno fuzzy-based sliding mode control approach. Complex Intell Syst 4(3):171–179
Zhao B, Liu D, Alippi C (2020) Sliding-mode surface-based approximate optimal control for uncertain nonlinear systems with asymptotically stable critic structure. IEEE Trans Cybern. https://doi.org/10.1109/TCYB.2019.2962011
Na J, Lv Y, Zhang K, Zhao J (2020) Adaptive identifier-critic-based optimal tracking control for nonlinear systems with experimental validation. IEEE Trans Syst Man Cybern: Syst. https://doi.org/10.1109/TSMC.2020.3003224
Pan Y, Meng JE, Sun T, Xu B, Yu H (2017) Adaptive fuzzy PD control with stable \({H}_{\infty }\) tracking guarantee. Neurocomputing 237:71–78
Ethics declarations
Conflict of Interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
The research work described in this paper was supported partially by the National Key Research and Development Program of China under Grant 2018AAA0100203, partially by the National Natural Science Foundation of China under Grants 61973330 and 61773075, partially by the Beijing Municipal Natural Science Foundation under Grant 4212038, partially by the Open Research Project of the State Key Laboratory of Industrial Control Technology, Zhejiang University, China under Grant ICT2021B48, partially by the Fundamental Research Funds for the Central Universities under Grant 2019NTST25, and partially by the Key Scientific Research Project of Bengbu University under Grant 2019ZR01zd.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Xia, H., Guo, P. Sliding mode-based online fault compensation control for modular reconfigurable robots through adaptive dynamic programming. Complex Intell. Syst. 8, 1963–1973 (2022). https://doi.org/10.1007/s40747-021-00364-3