This chapter presents an extension of the FTC scheme described in Chap. 3 and considers Linear Parameter Varying (LPV) systems rather than LTI systems. LPV systems can be considered as an extension or generalisation of LTI systems. They represent a certain class of finite dimensional linear systems, in which the entries of the state-space matrices depend continuously on a time varying parameter vector which belongs to a bounded compact set. The objective is to synthesise an FTC scheme which will work over a wider range of operating conditions. To design the virtual control law, the varying input distribution matrix is factorised into a fixed and a varying matrix. As discussed earlier in the text, the virtual control law, designed using the ISM technique, is translated into the actual actuator commands using a CA scheme. In this way the controller is automatically ‘scheduled’ and closed-loop stability is established throughout the entire operating envelope. The FTC scheme can maintain closed-loop stability even in the presence of total failures of certain actuators, provided that redundancy is available in the system. The FTC scheme takes into account imperfect estimation of the actuator effectiveness levels and also considers an adaptive scheme for the nonlinear modulation gains to account for this estimation error. The efficacy of the FTC scheme is tested in simulation by applying it to an LPV model of a benchmark transport aircraft, previously used in the literature.

8.1 Problem Formulation

LPV methods are appealing for nonlinear plants which can be modelled as time varying systems with state-dependent parameters that are measurable online. An LPV system can be defined in state-space form as

$$\begin{aligned} \dot{x}(t)= & {} A(\rho )x(t)+B(\rho )u(t)\end{aligned}$$
(8.1)
$$\begin{aligned} y(t)= & {} C(\rho )x(t)+D(\rho )u(t) \end{aligned}$$
(8.2)

where the matrices are of appropriate dimensions and the time varying parameter vector \(\rho (t)\) lies in a specified bounded compact set. In (8.1) and (8.2), the matrix entries change according to the parameter vector \(\rho (t)\). If all the system states are available, then a suitable state feedback controller \(u(t)=-Fx(t)\) can be designed in order to achieve the desired performance (and closed-loop stability) of the system

$$\begin{aligned} \dot{x}(t)=(A(\rho )-B(\rho )F)x(t) \end{aligned}$$

for all the admissible values of \(\rho (t)\) in a compact set. To account for actuator faults or failures, the linear parameter varying plant in (8.1) can be represented as

$$\begin{aligned} \dot{x}(t)=A(\rho )x(t)+B(\rho )W(t)u(t) \end{aligned}$$
(8.3)

where \(A(\rho )\in \mathop {\text {I}\!\text {R}}\nolimits ^{n\times n}\), \(B(\rho )\in \mathop {\text {I}\!\text {R}}\nolimits ^{n\times m}\) and \(W(t)\in \mathop {\text {I}\!\text {R}}\nolimits ^{m\times m}\) is a diagonal positive semi-definite weighting matrix whose diagonal entries \(w_1(t),\ldots ,w_m(t)\) model the efficiency levels of the actuators. As throughout the text, \(w_i(t)=1\) means that the ith actuator is working perfectly and is fault-free, whereas \(1> w_i(t)> 0\) indicates some level of fault is present (and that particular actuator works at reduced efficiency). If \(w_i(t)=0\), the ith actuator has completely failed and does not respond to the control signal \(u_i(t)\).

Assumption 8.1

The time varying parameter vector \(\rho (t)\) is assumed to lie in a specified bounded compact set \(\varOmega \subset \mathop {\text {I}\!\text {R}}\nolimits ^r\) and is assumed to be available for the controller design.

Assumption 8.2

Further assume that the varying plant matrices \(A(\rho )\) and \(B(\rho )\) depend affinely on the parameter \(\rho (t)\), that is

$$ A(\rho )=A_0+\sum _{i=1}^{r}\rho _iA_i, \quad B(\rho )=B_0+\sum _{i=1}^{r}\rho _iB_i $$
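
To make the affine dependence concrete, the short sketch below evaluates \(A(\rho )\) and \(B(\rho )\) for a given parameter vector. The matrices and the single scheduling parameter are purely illustrative placeholders (they are not the aircraft model of Sect. 8.3).

```python
import numpy as np

def eval_affine_lpv(A_list, B_list, rho):
    """Evaluate A(rho) = A0 + sum_i rho_i*A_i and B(rho) = B0 + sum_i rho_i*B_i.

    A_list = [A0, A1, ..., Ar] and B_list = [B0, B1, ..., Br]; rho has length r.
    """
    A = A_list[0] + sum(ri * Ai for ri, Ai in zip(rho, A_list[1:]))
    B = B_list[0] + sum(ri * Bi for ri, Bi in zip(rho, B_list[1:]))
    return A, B

# Hypothetical 2-state, 2-input system with a single parameter rho_1
A0 = np.array([[0.0, 1.0], [-2.0, -0.5]])
A1 = np.array([[0.0, 0.0], [-0.3, 0.0]])
B0 = np.array([[0.0, 0.0], [1.0, 0.8]])
B1 = np.array([[0.0, 0.0], [0.1, 0.0]])

A_rho, B_rho = eval_affine_lpv([A0, A1], [B0, B1], rho=[0.4])
print(A_rho)
print(B_rho)
```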

Assumption 8.3

To design the virtual control law, which is explained in the sequel, assume that the parameter varying matrix \(B(\rho )\) can be factorised as

$$\begin{aligned} B(\rho )=B_fE(\rho ) \end{aligned}$$
(8.4)

where \(B_f\in \mathop {\text {I}\!\text {R}}\nolimits ^{n\times m}\) is a fixed matrix and \(E(\rho )\in \mathop {\text {I}\!\text {R}}\nolimits ^{m\times m}\) is a matrix with varying components which is assumed to be invertible for all \(\rho (t)\in \varOmega \). This is of course a restriction on the class of systems to which the results in this chapter are applicable, but many aircraft systems, for example, fall into this category.

As discussed in Chap. 3, to resolve actuator redundancy, assume that by permuting the states, the matrix \(B_f\) can be partitioned as

$$\begin{aligned} B_f=\left[ \begin{array}{c} B_{1} \\ B_{2} \\ \end{array} \right] \end{aligned}$$
(8.5)

where \(B_1\in \mathop {\text {I}\!\text {R}}\nolimits ^{(n-l)\times m}\), and \(B_2\in \mathop {\text {I}\!\text {R}}\nolimits ^{l\times m}\) is of rank \(l<m\).

Assumption 8.4

It is assumed that \(\Vert B_2\Vert \gg \Vert B_1\Vert \) so that \(B_2\) provides the dominant contribution of the control action within the system as compared to \(B_1\).

Furthermore scale the last l states to ensure that \(B_2B_2^T=I_l\). This can be done without loss of generality.
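
A minimal numerical sketch of this scaling step (with illustrative matrices, assuming \(B_2\) has full row rank): scaling the last l states by \((B_2B_2^T)^{-1/2}\) produces a transformed \(B_2\) satisfying \(B_2B_2^T=I_l\).

```python
import numpy as np
from scipy.linalg import sqrtm

# Illustrative partition of B_f: one small row (B_1) and l = 2 dominant rows (B_2)
B1 = np.array([[0.01, 0.0, 0.0]])           # (n-l) x m, small by Assumption 8.4
B2 = np.array([[0.0, 2.0, 0.0],
               [0.0, 0.0, 0.5]])            # l x m, rank l

T = np.linalg.inv(sqrtm(B2 @ B2.T)).real    # scaling applied to the last l states
B2_scaled = T @ B2                          # the corresponding rows of A are scaled too
print(np.allclose(B2_scaled @ B2_scaled.T, np.eye(2)))   # True
```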

Using (8.4) and (8.5), the system in (8.3) can be written as

$$\begin{aligned} \dot{x}(t)=A(\rho )x(t)+\left[ \begin{array}{c} B_1E(\rho )W(t) \\ B_2E(\rho )W(t) \\ \end{array} \right] u(t) \end{aligned}$$
(8.6)

The design of the virtual control will be based on the fault-free system, i.e. when \(W(t)=I\). Define the virtual control input signal as:

$$\begin{aligned} \nu (t):=B_2E(\rho )u(t) \end{aligned}$$
(8.7)

where \(\nu (t)\in \mathop {\text {I}\!\text {R}}\nolimits ^l\) is the total control effort produced by the actuators. Using the fact \(B_2B_2^T=I_l\), one particular choice for the physical control law \(u(t)\in \mathop {\text {I}\!\text {R}}\nolimits ^m\) which is used to distribute the control effort among the actuators is

$$\begin{aligned} u(t):=(E(\rho ))^{-1}B_2^T\nu (t) \end{aligned}$$
(8.8)

Note the expression in (8.8) satisfies (8.7) since \((E(\rho ))^{-1}B_2^T\) is a right pseudo-inverse of \(B_2E(\rho )\).
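
As a quick numerical check of (8.7) and (8.8), the sketch below (with an illustrative invertible \(E(\rho )\) and a \(B_2\) satisfying \(B_2B_2^T=I_l\), not the aircraft values) confirms that the allocated command u(t) reproduces the requested virtual control \(\nu (t)\).

```python
import numpy as np

B2 = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])        # l x m with B2 @ B2.T = I_l
E = np.array([[1.2, 0.1, 0.0],
              [0.0, 0.9, 0.2],
              [0.0, 0.0, 1.1]])         # invertible E(rho) at the current rho
nu = np.array([0.5, -1.0])              # requested virtual control effort

u = np.linalg.solve(E, B2.T @ nu)       # u = E(rho)^{-1} B2^T nu, Eq. (8.8)
print(np.allclose(B2 @ E @ u, nu))      # True: Eq. (8.7) is recovered
```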

Remark 8.1

The control structure in (8.8) is different from that used in Chaps. 3 and 4, since it involves the varying matrix \(E(\rho )\).

Substituting (8.8) into (8.6) yields the state-space representation

$$\begin{aligned} \dot{x}(t)=A(\rho )x(t)+\underbrace{\left[ \begin{array}{c} B_1E(\rho )W(t)(E(\rho ))^{-1}B_2^T\\ B_2E(\rho )W(t)(E(\rho ))^{-1}B_2^T \\ \end{array} \right] }_{B_w(\rho )}\nu (t) \end{aligned}$$
(8.9)

in terms of the virtual control \(\nu (t)\). In the nominal case, when there is no fault in the system, i.e. when \(W(t)=I\), Eq. (8.9) simplifies to

$$\begin{aligned} \dot{x}(t)=A(\rho )x(t)+\underbrace{\left[ \begin{array}{c} B_1B_2^T\\ I_l \\ \end{array} \right] }_{B_{\nu }}\nu (t) \end{aligned}$$
(8.10)

exploiting the fact that \(B_2 B_2 ^T = I_l\).

Assumption 8.5

The pair (\(A(\rho ),B_{\nu }\)) is controllable for all values of \(\rho (t)\in \varOmega \).

In this chapter all the states are assumed to be available for the controller design, therefore a state feedback law \(\nu (t)=-Fx(t)\) can be designed in order to stabilise the nominal system

$$\dot{x}(t)=(A(\rho )-B_{\nu }F)x(t)$$

for all values of \(\rho (t)\in \varOmega \), as well as to achieve the desired closed-loop performance. The nominal fault-free system in (8.10) is used in the next section to design the virtual control law.

8.2 Integral Sliding Mode Controller Design

This section focuses initially on the design of the sliding surface and then subsequently the control law, so that the sliding motion on the sliding surface can be sustained for all time.

8.2.1 Design of Integral Switching Function

Here the switching function suggested in Eq. (3.21) from Sect. 3.2.1 is extended to LPV plants. Choose the sliding surface as

$${\mathscr {S}}=\{x\in \mathop {\text {I}\!\text {R}}\nolimits ^{n}: \quad \sigma (t)=0\}$$

where

$$\begin{aligned} \sigma (t):=Gx(t)-Gx(0)-G\int _{0}^{t}\left( A(\rho )-B_{\nu }F \right) x(\tau )d\tau \end{aligned}$$
(8.11)

and \(G\in \mathop {\text {I}\!\text {R}}\nolimits ^{l\times n}\) represents design freedom. Here

$$\begin{aligned} G:=B_2\left( B_f^TB_f\right) ^{-1}B_f^T \end{aligned}$$
(8.12)

is suggested where \(B_f\) is defined in (8.4). With this choice of G, and using the special properties of matrix \(B_2\) (i.e. \(B_2B_2^T=I_l\)), it is easy to verify that

$$\begin{aligned} GB_{\nu } = B_2\left( B_f^TB_f\right) ^{-1}B_f^TB_fB_2^T=I_l \end{aligned}$$
(8.13)

which means that nominally when there are no faults in the system and \(W=I_m\), the special choice of G in (8.12) serves as a left pseudo-inverse of the matrix \(B_{\nu }\). Also from Eq. (8.9)

$$\begin{aligned} \nonumber GB_{w}(\rho )= & {} B_2\left( B_f^TB_f\right) ^{-1}B_f^TB_fE(\rho )W(t)(E(\rho ))^{-1}B_2^T\\= & {} B_2E(\rho )W(t)(E(\rho ))^{-1}B_2^T \end{aligned}$$
(8.14)

which will be used in the sequel when defining the control law.
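
The properties (8.13) and (8.14) are easy to verify numerically. The sketch below uses the same illustrative placeholder matrices as the earlier snippets (not the aircraft model), computes G from (8.12) and checks both identities.

```python
import numpy as np

B1 = np.array([[0.01, 0.0, 0.0]])
B2 = np.array([[0.0, 1.0, 0.0],
               [0.0, 0.0, 1.0]])                      # B2 @ B2.T = I_l
Bf = np.vstack([B1, B2])                              # B_f = [B1; B2], Eq. (8.5)
Bnu = np.vstack([B1 @ B2.T, np.eye(2)])               # B_nu from Eq. (8.10)

G = B2 @ np.linalg.inv(Bf.T @ Bf) @ Bf.T              # Eq. (8.12)
print(np.allclose(G @ Bnu, np.eye(2)))                # True: Eq. (8.13)

# Eq. (8.14): with a fault W(t) and a varying E(rho)
E = np.array([[1.2, 0.1, 0.0], [0.0, 0.9, 0.2], [0.0, 0.0, 1.1]])
W = np.diag([0.3, 1.0, 1.0])                          # 70% loss of effectiveness in actuator 1
Bw = Bf @ E @ W @ np.linalg.inv(E) @ B2.T             # B_w(rho) from Eq. (8.9)
print(np.allclose(G @ Bw, B2 @ E @ W @ np.linalg.inv(E) @ B2.T))   # True
```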

Taking the time derivative of the switching function \(\sigma (t)\) along the trajectories of (8.9) yields

$$\begin{aligned} \dot{\sigma }(t)=G\dot{x}(t)-GA(\rho )x(t)+GB_{\nu }Fx(t) \end{aligned}$$
(8.15)

and after substituting from (8.9)

$$\begin{aligned} \dot{\sigma }(t)=GB_w(\rho )\nu (t)+\underbrace{GB_{\nu }}_{I_l}Fx(t) \end{aligned}$$
(8.16)

Therefore the expression for the equivalent control (associated with \(\dot{\sigma }(t)=0\)) can be written as

$$\begin{aligned} \nu _{{eq}}(t)= -\left( B_2E(\rho )W(t)(E(\rho ))^{-1}B_2^T\right) ^{-1}Fx(t) \end{aligned}$$
(8.17)

provided the matrix W(t) is such that \(\det (B_2E(\rho )W(t)(E(\rho ))^{-1}B_2^T)\ne 0\). Substituting (8.17) into (8.9) yields the expression for the sliding motion as

$$\begin{aligned} \dot{x}(t)=A(\rho )x(t)-B_w(\rho )\left( B_2E(\rho )W(t)(E(\rho ))^{-1}B_2^T\right) ^{-1}Fx(t) \end{aligned}$$
(8.18)

Adding and subtracting the term \(B_{\nu }Fx(t)\) on the right hand side of Eq. (8.18) yields

$$\begin{aligned} \dot{x}(t) = \left( A(\rho )-B_\nu F\right) x(t)+\left[ \begin{array}{c} \tilde{\varPhi }(t,\rho )\\ 0_l \\ \end{array} \right] Fx(t) \end{aligned}$$
(8.19)

where the term which models the uncertainty is

$$\begin{aligned} \tilde{\varPhi }(t,\rho ):=B_1B_2^T-B_1E(\rho )W(t)(E(\rho ))^{-1}B_2^T \left( B_2E(\rho )W(t)(E(\rho ))^{-1}B_2^T\right) ^{-1} \end{aligned}$$
(8.20)

Remark 8.2

From Eq. (8.20) it is clear that when there are no actuator faults in the system (i.e. \(W(t)=I_m\)), then \(\tilde{\varPhi }(t,\rho )\equiv 0\). However in the case of faults or failures (i.e. when \(W(t)\ne I_m\)), \(\tilde{\varPhi }(t,\rho )\ne 0\), and this term will be treated as unmatched uncertainty while sliding.

The closed-loop stability of the motion while sliding must be ensured in the presence of the ‘uncertainty’ \(\tilde{\varPhi }(t,\rho )\). To facilitate the closed-loop stability analysis, notice that Eq. (8.19) can be written as

$$\begin{aligned} \dot{x}(t) = \left( A(\rho )-B_\nu F\right) x(t)+\widetilde{B}\tilde{\varPhi }(t,\rho )Fx(t) \end{aligned}$$
(8.21)

where

$$\begin{aligned} \widetilde{B}:=\left[ \begin{array}{c} I_{n-l} \\ 0 \\ \end{array} \right] \end{aligned}$$
(8.22)

Now in order to define the class of faults or failures which the FTC scheme in this chapter can mitigate, let the diagonal entries of W(t) belong to the set

$$\begin{aligned} \mathscr {W}_{\varepsilon }=\{(w_1,\ldots ,w_m) \in \underbrace{[0,\,1]\times \cdots \times [0,\,1]}_{m\,\mathrm{{times}}} :(GB_{w}(\rho ))^T(GB_{w}(\rho ))>\varepsilon I\} \end{aligned}$$
(8.23)

where \(\varepsilon \) is a small positive scalar satisfying \(0<\varepsilon \ll 1\). Note when \(W(t)=I_m\), \((GB_{w}(\rho ))^T(GB_{w}(\rho ))=I>\varepsilon I\) and therefore \(\mathscr {W}_{\varepsilon }\ne \emptyset \). If the actuator effectiveness matrix \(W(t)=\mathrm {diag}(w_1(t),\ldots ,w_m(t))\in \mathscr {W}_{\varepsilon }\), then by construction

$$\Vert (GB_{w}(\rho ))^{-1}\Vert =\Vert \left( B_2E(\rho )W(t)(E(\rho ))^{-1}B_2^T\right) ^{-1}\Vert <\frac{1}{\sqrt{\varepsilon }}$$

The set \(\mathscr {W}_{\varepsilon }\) will be shown to constitute the class of faults/failures for which closed-loop stability can be maintained. From (8.20) note that for any \(W(t)\in \mathscr {W}_{\varepsilon }\)

$$\begin{aligned} \Vert \tilde{\varPhi }(t,\rho )\Vert \le \gamma _1\left( 1+\frac{c}{\sqrt{\varepsilon }}\right) \end{aligned}$$
(8.24)

where \(c=\max _{\rho \in \varOmega }\Vert E(\rho )\Vert \Vert (E(\rho ))^{-1}\Vert \) (i.e. the worst case condition number associated with \(E(\rho )\)); and \(\gamma _1=\Vert B_1\Vert \), which is small by hypothesis. Proving the stability of the closed-loop sliding motion in (8.21) (in the nominal as well as in the fault/failure scenarios) is one of the important parts of the design process which is demonstrated in the following subsection.
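
In practice, checking whether a candidate fault pattern lies in \(\mathscr {W}_{\varepsilon }\), and evaluating the bound (8.24), amounts to a few lines of linear algebra at each value of \(\rho \) (gridded over \(\varOmega \)). The sketch below is only an illustration with the placeholder matrices used earlier, not the aircraft model.

```python
import numpy as np

def in_W_eps(B2, E, W, eps):
    """Check (GB_w)^T (GB_w) > eps*I with GB_w = B2 E W E^{-1} B2^T, Eq. (8.23)."""
    GBw = B2 @ E @ W @ np.linalg.inv(E) @ B2.T
    return np.min(np.linalg.eigvalsh(GBw.T @ GBw)) > eps

def phi_bound(B1, E, eps):
    """Right-hand side of Eq. (8.24): gamma_1*(1 + c/sqrt(eps)) at this value of rho."""
    gamma1 = np.linalg.norm(B1, 2)
    c = np.linalg.cond(E)            # ||E(rho)|| * ||E(rho)^{-1}||
    return gamma1 * (1.0 + c / np.sqrt(eps))

B1 = np.array([[0.01, 0.0, 0.0]])
B2 = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
E = np.array([[1.2, 0.1, 0.0], [0.0, 0.9, 0.2], [0.0, 0.0, 1.1]])
W_fail = np.diag([0.0, 1.0, 1.0])    # total failure of the first actuator

eps = 0.25
print(in_W_eps(B2, E, W_fail, eps))  # can this failure be accommodated?
print(phi_bound(B1, E, eps))         # worst-case size of the unmatched term
```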

Remark 8.3

The conditions in this chapter are subtly different to those in Chaps. 3 and 4. In (8.23) the norm of \((GB_{w}(\rho ))^{-1}\) must be guaranteed to be bounded by limiting \(W(t) \in \mathscr {W}_{\varepsilon }\), thus introducing an explicit \(\varepsilon \) to bound the minimum singular value of \(GB_{w}(\rho )\) away from zero. This is not necessary in Chaps. 3 and 4, and so the ‘price’ for facilitating a wider operating envelope is a slightly more restricted set of possible failures.

8.2.2 Closed-Loop Stability Analysis

In the nominal fault-free scenario when \(W(t)=I_m\), it is easy to verify \(\tilde{\varPhi }(t,\rho )=0\), and Eq. (8.21) simplifies to

$$\begin{aligned} \dot{x}(t)= \left( A(\rho )-B_\nu F\right) x(t) \end{aligned}$$
(8.25)

which is stable by design of F. However in fault/failure scenarios, closed-loop stability needs to be proven. To this end, Eq. (8.21) can also be represented by

$$\begin{aligned} \dot{x}(t) = \underbrace{(A(\rho )-B_\nu F)}_{\widetilde{A}(\rho )}x(t)+\widetilde{B}\overbrace{\tilde{\varPhi }(t,\rho )\underbrace{Fx(t)}_{\widetilde{y}(t)}}^{\widetilde{u}(t)} \end{aligned}$$
(8.26)

Define \(\gamma _2\) to be the \(\mathscr {L}_2\) gain associated with the operator

$$\begin{aligned} \widetilde{G}(s):= F(sI-\widetilde{A}(\rho ))^{-1}\widetilde{B} \end{aligned}$$
(8.27)

Proposition 8.1

For any possible combination of faults or failures belonging to the set \(\mathscr {W}_{\varepsilon }\), the closed-loop sliding motion in (8.26) will be stable if

$$\begin{aligned} \gamma _2\gamma _1\left( 1+\frac{c}{\sqrt{\varepsilon }}\right) <1 \end{aligned}$$
(8.28)

Proof

The specially written structure in (8.26) can be thought of as a feedback interconnection of an LPV plant and a time varying feedback gain associated with

$$\begin{aligned} \dot{x}(t)= & {} \widetilde{A}(\rho )x(t)+\widetilde{B}\widetilde{u}(t) \end{aligned}$$
(8.29)
$$\begin{aligned} \widetilde{y}(t)= & {} Fx(t) \end{aligned}$$
(8.30)

where

$$\begin{aligned} \widetilde{u}(t)=\tilde{\varPhi }(t,\rho )\widetilde{y}(t) \end{aligned}$$
(8.31)

According to the small gain theorem (Appendix B.1.2), the feedback interconnection of (8.29)–(8.31) is stable if

$$\begin{aligned} \Vert \widetilde{G}(s)\Vert \Vert \tilde{\varPhi }(t,\rho )\Vert <1 \end{aligned}$$
(8.32)

Since the \(\mathscr {L}_2\) gain of the operator \(\widetilde{G}(s)\) in (8.27) is \(\gamma _2\) and, from (8.24), \(\Vert \tilde{\varPhi }(t,\rho )\Vert \le \gamma _1\left( 1+\frac{c}{\sqrt{\varepsilon }}\right) \) for any \(W(t)\in \mathscr {W}_{\varepsilon }\), condition (8.28) guarantees that (8.32) holds, and hence the closed-loop system in (8.26) is stable. \(\blacksquare \)

In the next subsection the ideas of integral sliding modes are used to design the virtual control law \({\nu }(t)\) in order to produce the virtual control effort.

8.2.3 ISM Control Laws

Consider the (integral sliding mode) control law

$$\begin{aligned} {\nu }(t)= (GB_{\hat{w}}(\rho ))^{-1}({\nu }_l(t)+ {\nu }_n(t)) \end{aligned}$$
(8.33)

where

$$\begin{aligned} GB_{\hat{w}}(\rho ) = B_2E(\rho ) \widehat{W}(t) (E(\rho ))^{-1}B_2^T \end{aligned}$$
(8.34)

and \(\widehat{W}(t)\) is an estimate of W(t). The linear part of the control law \({\nu }_l(t)\) in (8.33) is defined as

$$\begin{aligned} \nu _l(t):= -Fx(t) \end{aligned}$$
(8.35)

and the nonlinear discontinuous part, which enforces sliding and provides robustness against fault/failure scenarios, is given by

$$\begin{aligned} {\nu }_n(t) := -\kappa (t,x) \frac{\sigma (t)}{\Vert \sigma (t)\Vert } \quad \text {for } \sigma (t) \ne 0 \end{aligned}$$
(8.36)

where \(\kappa (t,x)>0\) is an adaptive modulation function given by

$$\begin{aligned} \kappa (t,x) = \Vert F\Vert \Vert x(t)\Vert \bar{\kappa }(t,x) + \eta \end{aligned}$$
(8.37)

where \(\eta \) is a positive scalar. The positive adaptation gain \(\bar{\kappa }(t,x)\) evolves according to

$$\begin{aligned} \dot{\bar{\kappa }}(t,x) = -\varsigma _1 \bar{\kappa }(t,x) + \varsigma _2 \varepsilon _0 \Vert F\Vert \Vert x(t)\Vert \Vert \sigma (t)\Vert \end{aligned}$$
(8.38)

where \(\varsigma _1, \varsigma _2\) and \(\varepsilon _0\) are positive (design) scalar gains.
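
A minimal sketch of how the control law (8.33)–(8.38) might be evaluated at one sample instant is given below, with a simple forward-Euler update for the adaptation law (8.38). The matrices, signals and sample time are placeholders; in the simulations of Sect. 8.3 the discontinuity in (8.36) is additionally smoothed sigmoidally.

```python
import numpy as np

def ism_virtual_control(x, sigma, kappa_bar, F, GB_what, eta, eps0, vs1, vs2, dt):
    """One evaluation of the ISM law (8.33)-(8.38); returns nu and the updated kappa_bar."""
    nu_l = -F @ x                                                        # Eq. (8.35)
    norm_sig = np.linalg.norm(sigma)
    kappa = np.linalg.norm(F, 2) * np.linalg.norm(x) * kappa_bar + eta   # Eq. (8.37)
    nu_n = -kappa * sigma / norm_sig if norm_sig > 0 else np.zeros_like(sigma)  # Eq. (8.36)
    nu = np.linalg.solve(GB_what, nu_l + nu_n)                           # Eq. (8.33)
    kappa_bar_dot = (-vs1 * kappa_bar
                     + vs2 * eps0 * np.linalg.norm(F, 2) * np.linalg.norm(x) * norm_sig)  # Eq. (8.38)
    return nu, kappa_bar + dt * kappa_bar_dot

# Example call with placeholder data (l = 2 virtual inputs, n = 4 states) and the
# gain values used later in the chapter (eta = 1, eps0 = 0.01, vs1 = 1, vs2 = 0.01)
F = 0.5 * np.ones((2, 4))
nu, kb = ism_virtual_control(x=np.array([0.1, 0.0, -0.2, 0.05]),
                             sigma=np.array([0.02, -0.01]),
                             kappa_bar=0.0, F=F, GB_what=np.eye(2),
                             eta=1.0, eps0=0.01, vs1=1.0, vs2=0.01, dt=0.01)
```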

Assumption 8.6

In the analysis which follows, it is assumed the actuator efficiency level W(t) is not perfectly known but that the estimate \(\widehat{W}(t)\) satisfies

$$\begin{aligned} W(t) = \widehat{W}(t) (I+\varDelta (t)) \end{aligned}$$
(8.39)

where the diagonal matrix \(\varDelta (t)\) represents imperfections in the estimation of W(t).

Substituting (8.39) into (8.14) yields

$$\begin{aligned} GB_{w}(\rho ) = B_2E(\rho )\widehat{W}(t)(E(\rho ))^{-1}B_2^T+ B_2E(\rho )\widehat{W}(t) \varDelta (t) (E(\rho ))^{-1}B_2^T \end{aligned}$$
(8.40)

Using (8.33), Eq. (8.16) becomes

$$ \dot{\sigma }(t) = GB_w(\rho ) (GB_{\hat{w}}(\rho ))^{-1} (\nu _l(t) + \nu _n(t)) + Fx(t) $$

Substituting from (8.40) and for \(\nu _l\) from (8.35) yields

$$\begin{aligned} \dot{\sigma }(t)= & {} (I + \widehat{\varDelta }(t))(\nu _l(t) + \nu _n(t)) + Fx(t) \nonumber \\= & {} (I + \widehat{\varDelta }(t)) \nu _n(t) - \widehat{\varDelta }(t)Fx(t) \end{aligned}$$
(8.41)

where

$$\begin{aligned} \widehat{\varDelta }(t) =\left( B_2E(\rho )\widehat{W}^{\frac{1}{2}}(t) \varDelta (t) \widehat{W}^{\frac{1}{2}}(t) (E(\rho ))^{-1}B_2^T \right) \left( B_2E(\rho )\widehat{W}(t)(E(\rho ))^{-1}B_2^T \right) ^{-1} \end{aligned}$$
(8.42)

Define

$$\begin{aligned} \mathcal{D}_{\varepsilon _0} = \left\{ \varDelta (t) \text { from (8.39)} ~:~ \Vert \widehat{\varDelta }(t)\Vert < \sqrt{1-2\varepsilon _0} \right\} \end{aligned}$$
(8.43)

for some scalar \(0<\varepsilon _0 \ll 1/2\). Clearly the set \(\mathcal{D}_{\varepsilon _0}\) is not empty since \(\varDelta (t) = 0 \in \mathcal{D}_{\varepsilon _0}\). It is easy to show that if

$$\begin{aligned} \Vert \widehat{\varDelta }(t)\Vert < \sqrt{1-2\varepsilon _0} \end{aligned}$$
(8.44)

then, since \(\widehat{\varDelta }(t)+\widehat{\varDelta }^T(t)\ge -2\Vert \widehat{\varDelta }(t)\Vert I_l\) and \(\sqrt{1-2\varepsilon _0}\le 1-\varepsilon _0\),

$$\begin{aligned} 2I_l + \widehat{\varDelta }(t) + \widehat{\varDelta }^T(t) > 2 \varepsilon _0 I_l \end{aligned}$$
(8.45)

Consider the positive definite candidate Lyapunov function

$$\begin{aligned} V(t)= \underbrace{\sigma ^T(t) \sigma (t)}_{V_1(t)} + \underbrace{\frac{1}{\varsigma _2} e^2(t)}_{V_2(t)} \end{aligned}$$
(8.46)

where

$$\begin{aligned} e(t) = \bar{\kappa }(t,x) - \frac{1}{\varepsilon _0} \end{aligned}$$
(8.47)

Since \(\Vert \widehat{\varDelta }(t)\Vert < \sqrt{1-2\varepsilon _0}\), taking the derivative of \(V_1(t)\) from (8.46), and then substituting from (8.41), yields

$$\begin{aligned} \dot{V}_1(t)= & {} -\frac{\kappa (t,x)}{\Vert \sigma (t)\Vert }\sigma ^T(t)\left( 2I_l + \widehat{\varDelta }(t)+ \widehat{\varDelta }^T(t)\right) \sigma (t) -2 \sigma ^T(t)\widehat{\varDelta }(t) F x(t) \nonumber \\\le & {} - 2 \kappa (t,x) \varepsilon _0 \Vert \sigma (t)\Vert + 2 \Vert \sigma (t)\Vert \Vert \widehat{\varDelta }(t)\Vert \Vert F x(t)\Vert \nonumber \\\le & {} - 2 \kappa (t,x) \varepsilon _0 \Vert \sigma (t)\Vert + 2 \Vert \sigma (t)\Vert \sqrt{1-2\varepsilon _0} \, \Vert F\Vert \, \Vert x(t)\Vert \end{aligned}$$
(8.48)

From (8.47) it follows that \( \bar{\kappa }(t,x) = e(t) \,+\, \frac{1}{\varepsilon _0} \). Then using the fact that \(\sqrt{1-2\varepsilon _0} < 1\) and substituting (8.37) into (8.48), it follows

$$\begin{aligned} \dot{V}_1(t) \le - 2 \varepsilon _0 \Vert F\Vert \Vert x(t)\Vert \Vert \sigma (t)\Vert e(t) - 2 \eta \varepsilon _0 \Vert \sigma (t)\Vert \end{aligned}$$
(8.49)

Taking the derivative of \(V_2(t)\) from (8.46), using the fact that \(\dot{e}(t) = \dot{\bar{\kappa }}(t,x)\) from (8.47) and substituting from (8.38) gives

$$\begin{aligned} \nonumber \dot{V}_2(t)= & {} \frac{2}{\varsigma _2}e(t)\dot{e}(t)= \frac{2}{\varsigma _2}e(t)\dot{\bar{\kappa }}(t,x)\\= & {} -\frac{2\varsigma _1}{\varsigma _2} e(t) \bar{\kappa }(t,x) + 2\varepsilon _0 e(t) \Vert F\Vert \Vert x(t)\Vert \Vert \sigma (t)\Vert \end{aligned}$$
(8.50)

Therefore, combining (8.49) and (8.50), and substituting for \(\bar{\kappa }(t,x)\) from (8.47), yields

$$\begin{aligned} \nonumber \dot{V}(t)= & {} \dot{V}_1(t)+\dot{V}_2(t) \\ \nonumber\le & {} -\frac{2\varsigma _1}{\varsigma _2}e(t) \bar{\kappa }(t,x) - 2 \eta \varepsilon _0 \Vert \sigma (t)\Vert \\= & {} -\frac{2\varsigma _1}{\varsigma _2 \varepsilon _0}e(t) - \frac{2\varsigma _1}{\varsigma _2}e^2(t) - 2 \eta \varepsilon _0 \Vert \sigma (t)\Vert \end{aligned}$$
(8.51)

It is easy to show that

$$ -\frac{2\varsigma _1}{\varsigma _2 \varepsilon _0}e(t) - \frac{2\varsigma _1}{\varsigma _2}e^2(t) \le \frac{\varsigma _1}{2 \varsigma _2 \varepsilon _0^2} $$

for all values of e(t) and therefore from (8.51) it follows that

$$\begin{aligned} \dot{V}(t)\le & {} \frac{\varsigma _1}{2 \varsigma _2 \varepsilon _0^2} - 2 \eta \varepsilon _0 \Vert \sigma (t)\Vert \end{aligned}$$
(8.52)

which implies that \(\sigma (t)\) moves into a boundary layer about \(\sigma (t)=0\) of size \(\frac{\varsigma _1}{4 \varsigma _2 \varepsilon _0^3 \eta }\).

Remark 8.4

The adaptation scheme in (8.37) and (8.38) makes the approach in this chapter quite different from Chaps. 3 and 4. Adaptation is required here because of the complex relationship between \(\varDelta (t)\) and \(\widehat{\varDelta }(t)\) in (8.42) and the limitations associated with (8.43).

Remark 8.5

The fact that a traditional sliding mode scheme involving a unit vector structure has been selected as the basis for the control law has facilitated the inclusion of an adaptive scheme. An adaptive gain is highly desirable in FTC schemes to compensate for sudden significant changes to the plant.

Finally, the physical control law, which is used to distribute the control effort among the available actuators, is obtained by substituting (8.33)–(8.36) into (8.8), which yields

$$\begin{aligned} u(t)=-(E(\rho ))^{-1}B_2^T \left( B_2E(\rho ) \widehat{W}(t) (E(\rho ))^{-1}B_2^T \right) ^{-1}\left( Fx(t)+\kappa (\cdot )\frac{\sigma (t)}{\Vert \sigma (t)\Vert } \right) \nonumber \\ \end{aligned}$$
(8.53)

Remark 8.6

Note the physical control law in (8.53) requires an estimate of the effectiveness level of the actuators \(\widehat{W}(t)\) (see Fig. 8.1 for details). In this chapter, it is assumed that this estimate is provided by an FDI scheme (see for example Sect. 3.3.1). This information can also be obtained by directly comparing the controller signals with the actual actuator deflection, as measured by control surface sensors, which are available in many aircraft systems.
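
One simple way of forming the estimate \(\widehat{W}(t)\) along the lines suggested in this remark is to compare the commanded and measured deflection of each actuator, with a dead-zone and low-pass filtering to avoid dividing by small commands. The sketch below is purely illustrative (the dead-zone, filter constant and signals are hypothetical) and is not the FDI scheme of Sect. 3.3.1.

```python
import numpy as np

def estimate_effectiveness(u_cmd, u_meas, w_prev=None, dead_zone=0.05, alpha=0.9):
    """Crude per-actuator effectiveness estimate w_i from measured/commanded deflection.

    u_cmd, u_meas : commanded and measured actuator positions (arrays of length m)
    dead_zone     : commands smaller than this are ignored (previous estimate is held)
    alpha         : first-order low-pass filtering of the raw ratio
    """
    u_cmd = np.asarray(u_cmd, dtype=float)
    u_meas = np.asarray(u_meas, dtype=float)
    w_prev = np.ones_like(u_cmd) if w_prev is None else np.asarray(w_prev, dtype=float)
    active = np.abs(u_cmd) > dead_zone
    ratio = np.clip(u_meas / np.where(active, u_cmd, 1.0), 0.0, 1.0)
    raw = np.where(active, ratio, w_prev)
    return alpha * w_prev + (1.0 - alpha) * raw

# e.g. elevator tracking its command, a jammed stabiliser, a healthy engine channel
w_hat = estimate_effectiveness(u_cmd=[2.0, 1.5, 0.3], u_meas=[1.9, 0.0, 0.3])
W_hat = np.diag(w_hat)      # diagonal estimate of W(t) used in (8.53)
```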

Fig. 8.1 Overall FTC scheme

8.2.4 Design of the State Feedback Gain

In this section, using the nominal system (8.10), the state feedback gain F will be designed. In designing F two objectives must be met: the first is to achieve pre-specified nominal performance for all admissible values of \(\rho (t)\), and the second is to satisfy the closed-loop stability condition in (8.28) via the small gain theorem. Nominal performance will be incorporated by the use of an LQR-type cost function

$$J=\int _{0}^{\infty }(x^{T}Qx+u^{T}Ru)dt$$

where Q and R are s.p.d. matrices. The LPV system matrices (\(\widetilde{A}(\rho ), \widetilde{B}, F\)), which depend affinely on the parameter vector \(\rho (t)\) in (8.29) and (8.30), can be represented by the polytopic system (\(\widetilde{A}(\omega _i), \widetilde{B}, F\)), where the vertices \(\omega _1, \omega _2,\ldots ,\omega _{n_\omega }\), with \(n_\omega =2^r\), correspond to the extremes of the allowable range of \(\rho (t)\in \varOmega \). Consequently

$$ \widetilde{A}(\rho )=\sum _{i=1}^{2^r}\widetilde{A}_i\delta _i \, , \quad \sum _{i=1}^{2^r}\delta _i=1, \quad \delta _i\ge 0$$

The LQR performance criteria can then be posed as an optimisation problem:

Minimise \(\text {trace }(X^{-1})\) subject to

$$\begin{aligned} \left[ \begin{array}{cc} A(\omega _i)X+XA^{T}(\omega _i)-B_{\nu }Y-Y^{T}B_{\nu }^{T} &{}(Q_1X-R_1Y)^{T} \\ Q_1X-R_1Y &{} -I \\ \end{array} \right]< & {} 0 \end{aligned}$$
(8.54)
$$\begin{aligned} X> & {} 0 \end{aligned}$$
(8.55)

where

$$\begin{aligned} Q_1=\left[ \begin{array}{c} Q^{\frac{1}{2}}\\ 0_{l \times n} \\ \end{array} \right] , \quad R_1=\left[ \begin{array}{c} 0_{n \times l} \\ R^{\frac{1}{2}} \\ \end{array} \right] ^T \end{aligned}$$
(8.56)

and \(Y:=FX\), and \(X^{-1}\in \mathop {\text {I}\!\text {R}}\nolimits ^{n\times n}\) is the Lyapunov matrix.

To satisfy the closed-loop stability condition in (8.28), it is sufficient to apply the Bounded Real Lemma at each vertex of the polytope and ensure that

$$\begin{aligned} \left[ \begin{array}{ccc} A(\omega _i)X+XA^{T}(\omega _i)-B_{\nu }Y-Y^{T}B_{\nu }^{T} &{} \tilde{B} &{} Y^{T} \\ \tilde{B}^{T} &{} -\gamma ^{2}I &{} 0 \\ Y &{} 0 &{} -I \\ \end{array} \right] <0 \end{aligned}$$
(8.57)

for \(i=1,\ldots ,2^r\). Since the objective is to minimise \(\text {trace}(X^{-1})\) using a common Lyapunov matrix for the LMI formulations at each vertex, this can be achieved by introducing the slack variable \(Z\in \mathop {\text {I}\!\text {R}}\nolimits ^{n\times n}\) and posing the problem as:

Minimise \(\mathrm{{trace}}(Z)\) subject to (8.54), (8.55) and (8.57) and

$$\begin{aligned} \left[ \begin{array}{cc} -Z &{} I_n \\ I_n &{} -X \\ \end{array} \right] <0 \end{aligned}$$
(8.58)

The decision variables are X, Y and Z. The matrix Z satisfies \(\mathrm{{trace}}(Z)\ge \mathrm{{trace}}(X^{-1})\). Therefore the LMIs in (8.54)–(8.58) can be solved over all the vertices of the polytopic system. The state feedback matrix is obtained from the expression \(F=YX^{-1}\).
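
The vertex LMIs (8.54)–(8.58) can also be assembled directly in a generic semidefinite programming tool. The sketch below uses CVXPY and is only an illustration of the optimisation structure (the chapter's own design used the MATLAB function ‘msfsyn’); the function name and any matrices passed in are assumptions. Here A_vertices should contain the \(A(\omega _i)\) of the (augmented) plant, Bnu the matrix \(B_{\nu }\), Btil the matrix \(\widetilde{B}\) from (8.22), and gamma the desired bound on the \(\mathscr {L}_2\) gain.

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import sqrtm

def design_F(A_vertices, Bnu, Btil, Q, R, gamma):
    """Sketch of the vertex LMIs (8.54)-(8.58); returns F = Y X^{-1}."""
    n, l = Bnu.shape
    p = Btil.shape[1]
    Qs, Rs = np.real(sqrtm(Q)), np.real(sqrtm(R))
    X = cp.Variable((n, n), symmetric=True)
    Y = cp.Variable((l, n))
    Z = cp.Variable((n, n), symmetric=True)
    sym = lambda M: (M + M.T) / 2                      # symmetrise for the SDP constraints
    cons = [X >> 1e-6 * np.eye(n),                     # Eq. (8.55)
            sym(cp.bmat([[-Z, np.eye(n)], [np.eye(n), -X]])) << 0]          # Eq. (8.58)
    for Ai in A_vertices:
        lyap = Ai @ X + X @ Ai.T - Bnu @ Y - Y.T @ Bnu.T
        perf = cp.vstack([Qs @ X, -Rs @ Y])            # the block Q1*X - R1*Y in (8.54)
        cons += [sym(cp.bmat([[lyap, perf.T],
                              [perf, -np.eye(n + l)]])) << 0,               # Eq. (8.54)
                 sym(cp.bmat([[lyap, Btil, Y.T],
                              [Btil.T, -gamma**2 * np.eye(p), np.zeros((p, l))],
                              [Y, np.zeros((l, p)), -np.eye(l)]])) << 0]    # Eq. (8.57)
    cp.Problem(cp.Minimize(cp.trace(Z)), cons).solve()
    return Y.value @ np.linalg.inv(X.value)
```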

8.3 Simulations

The simulations in this chapter are based on the RECOVER benchmark model. For the controller design the LPV model of RECOVER given in Appendix A.1.1 is used. The aerodynamic coefficients are polynomial functions of velocity \(V_{tas}\) and angle of attack \(\alpha \) in the ranges \([150,\,250]\) m/s and \([-2,\,8]\) deg respectively, at an altitude of 7000 m. The states of the LPV plant are \( (\bar{\alpha },\bar{q},\bar{V}_{tas},\bar{\theta },\bar{h}_e) \), which represent deviations of the angle of attack, pitch rate, true air speed, pitch angle and altitude from their trim values. The inputs of the LPV plant are \( (\bar{\delta _e},\bar{\delta _s},\bar{T_n}) \), which represent deviations of elevator deflection, horizontal stabiliser deflection and total engine thrust from their trim values respectively. The trim values of the states are

$$(\alpha _{trim},q_{trim},{V}_{tas_{trim}},\theta _{trim},h_{e_{trim}})\!=\!(1.05\,\mathrm{{deg}},\,0\,\mathrm{{deg/s}},\,227.02\,\mathrm{{m/s}},\,1.05\,\mathrm{{deg}},\,7000\,\mathrm{{m}})$$

and the trim values of the LPV plant inputs are

$$(\delta _{e_{trim}}, \delta _{s_{trim}}, T_{n_{trim}})=(0.163~\mathrm{{deg}},~ 0.590~\mathrm{{deg}},~ 42291~\mathrm{{N}})$$

For the controller design, the state \(\bar{h}_e\) is removed and the states of the LPV plant have been reordered as \((\bar{\theta },\bar{\alpha },\bar{V}_{tas},\bar{q})\). The LPV system matrices are given by

$$\begin{aligned} A(\rho )=A_0+\sum _{i=1}^{7}A_i\rho _i \quad \mathrm{{and}} \quad B(\rho )=B_0+\sum _{i=1}^{7}B_i\rho _i \end{aligned}$$
(8.59)

where

$$\begin{aligned} (\rho _1,\ldots ,\rho _7):=\left( \bar{\alpha },\bar{V}_{tas},\bar{V}_{tas}\bar{\alpha },\bar{V}^2_{tas},\bar{V}^2_{tas}\bar{\alpha },\bar{V}^3_{tas},\bar{V}^4_{tas}\right) \end{aligned}$$
(8.60)

where \(\bar{\alpha }=\alpha -\alpha _{trim}\) and \(\bar{V}_{tas}={V}_{tas}-{V}_{tas_{trim}}\). For full details of the LPV plant see the Appendix A.1.1. The input distribution matrix \(B(\rho )\) has been factorised into fixed and varying matrices:

$$\begin{aligned} B(\rho )=\underbrace{\left[ \begin{array}{ccc} 0 &{} 0 &{} 0 \\ 0.01 &{} 0 &{} 0 \\ \hline 0 &{} 1 &{} 0 \\ 0 &{} 0 &{} 1 \\ \end{array} \right] }_{B_f}\underbrace{\left[ \begin{array}{ccc} 100 b_{31}(\rho ) &{} 100 b_{32}(\rho ) &{} 100 b_{33}(\rho ) \\ 0 &{} 0 &{} b_{23}(\rho ) \\ b_{41}(\rho ) &{} b_{42}(\rho ) &{} b_{43}(\rho ) \\ \end{array} \right] }_{E(\rho )} \end{aligned}$$
(8.61)

Note that the top portion of \(B_{f}\) corresponds to the \(B_1\) term in (8.5), which has been made small compared to the \(B_2\) term. In order to introduce a tracking facility, the plant states are augmented with the integral action states given by

$$\begin{aligned} \dot{x}_r(t)=r(t)-C_c\bar{x}(t) \end{aligned}$$
(8.62)

where r(t) is the command to be tracked, and \(C_c\) is the controlled output distribution matrix. The controlled outputs have been chosen as flight path angle (FPA) and \(\bar{V}_{tas}\), where \(\mathrm{{FPA}}=\bar{\theta }-\bar{\alpha }\). By defining new states as

$$ x_a(t)=\mathrm{{col}}(x_r(t),{\bar{x}}(t)) $$

the augmented system from (8.10) becomes

$$\begin{aligned} \dot{x}_a(t)=A_a(\rho )x_a(t)+B_{\nu _a}\nu (t)+B_rr(t) \end{aligned}$$
(8.63)

where

$$\begin{aligned} A_a(\rho ):=\left[ \begin{array}{cc} 0 &{} -C_c \\ 0 &{} A(\rho ) \\ \end{array} \right] ,\quad B_{\nu _a}:=\left[ \begin{array}{c} 0 \\ B_{\nu } \\ \end{array} \right] , \quad B_r=\left[ \begin{array}{c} I_l \\ 0 \\ \end{array} \right] \end{aligned}$$
(8.64)

which is used as the basis for the control law design. In the augmented system, the choice of G in (8.12) becomes \(G_a:=B_2(B_{f_a}^TB_{f_a})^{-1}B_{f_a}^T\) where

$$\begin{aligned} B_{f_a}=\left[ \begin{array}{c} 0 \\ B_f \\ \end{array} \right] \end{aligned}$$
(8.65)
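
Setting up the augmentation (8.62)–(8.65) is mechanical; the sketch below assembles \(A_a(\rho )\), \(B_{\nu _a}\), \(B_r\) and \(G_a\) from the un-augmented matrices. The helper and the example \(C_c\) (selecting \(\mathrm{{FPA}}=\bar{\theta }-\bar{\alpha }\) and \(\bar{V}_{tas}\) from the reordered states \((\bar{\theta },\bar{\alpha },\bar{V}_{tas},\bar{q})\)) are illustrative assumptions rather than code from the benchmark.

```python
import numpy as np

def augment(A, Bnu, Bf, B2, Cc):
    """Assemble A_a, B_nu_a, B_r of (8.63)-(8.64) and G_a via B_f_a from (8.65)."""
    n = A.shape[0]
    l = Bnu.shape[1]
    Aa = np.block([[np.zeros((l, l)), -Cc],
                   [np.zeros((n, l)), A]])
    Bnua = np.vstack([np.zeros((l, l)), Bnu])
    Br = np.vstack([np.eye(l), np.zeros((n, l))])
    Bfa = np.vstack([np.zeros((l, Bf.shape[1])), Bf])          # Eq. (8.65)
    Ga = B2 @ np.linalg.inv(Bfa.T @ Bfa) @ Bfa.T               # G_a as defined above
    return Aa, Bnua, Br, Ga

# Controlled outputs: FPA = theta - alpha and Vtas, with states (theta, alpha, Vtas, q)
Cc = np.array([[1.0, -1.0, 0.0, 0.0],
               [0.0,  0.0, 1.0, 0.0]])
```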

8.3.1 Control Design Objectives

The tracking requirements for FPA and true air speed \(V_{tas}\) are decoupled responses, with settling times of 20 and 45 s respectively in the fault-free scenario. In the case of an elevator or horizontal stabiliser failure, the tracking requirement for \(V_{tas}\) remains unchanged (because speed is controlled by thrust), but for the FPA response a settling time of 30 s is considered. In this example, a fixed gain matrix F is valid for the entire range of the LPV model. Note that designing a fixed matrix F allows the MATLAB state-feedback synthesis function ‘msfsyn’ to be used to solve the LMIs (8.54)–(8.58). For designing the state feedback gain F, the Q and R matrices in (8.54) have been chosen as

$$Q=\mathrm{{diag}}(1.1,0.04,1,1,0.03,5) \quad \mathrm{{and}} \quad R=\mathrm{{diag}}(0.007,1.1)$$

where the first two diagonal entries of the Q matrix correspond to the integral action states. The state feedback gain resulting from the optimisation is given by

$$\begin{aligned} F=\left[ \begin{array}{cccccc} -1.1161 &{} -2.3532 &{} -10.3807 &{} 3.8107 &{} 3.7409 &{} -1.3623 \\ -0.9891 &{} 0.0177 &{} 9.6902 &{} -4.9097 &{} -0.0222 &{} 3.3779 \\ \end{array} \right] \end{aligned}$$
(8.66)

In the nominal case, the engines are considered to be fault-free. The positive scalar \(\varepsilon \) from (8.23) has been chosen as \(\varepsilon =0.28\). It can then be shown (using a numerical search algorithm) that the maximum value of \(\Vert \tilde{\varPhi }(t,\rho )\Vert \) from Eq. (8.24) is 0.0673. To satisfy the closed-loop stability condition in (8.28), the value of \(\gamma _2\) associated with the operator in (8.27) should satisfy \(\gamma _2<\frac{\sqrt{\varepsilon }}{\gamma _1(\sqrt{\varepsilon }+c)}=14.8588\). The value associated with F in (8.66) is \(\gamma _2=11.0000\), and hence the stability condition in (8.28) is satisfied. During the simulations, the discontinuity associated with the nonlinear control term in (8.36) has been smoothed by using a sigmoidal approximation \(\frac{\sigma _a(t)}{\Vert \sigma _a(t)\Vert +\delta }\), where \(\delta \) is a small positive scalar. This ensures a smooth and realistic control signal is sent to the actuators and allows extra design freedom, especially when faults/failures occur. Here \(\delta \) has been chosen as \(\delta =0.01\). The adaptive gain parameters from (8.37) and (8.38) used in the simulation are: \(\eta =1, \varsigma _1=1, \varsigma _2=0.01\) and \(\varepsilon _0=0.01\). The control law in (8.53) requires information about the actuator effectiveness level matrix W(t), which can be estimated by an FDI scheme, as given in Sect. 3.3.1. As in the GARTEUR FM-AG16 project, it is assumed in this chapter that a measurement of the actual actuator deflection is available, which is not an unrealistic assumption in modern aircraft systems. Information provided by the actual actuator deflection can be compared with the signals from the controller to indicate the effectiveness of the actuator.
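
The stability margin quoted above can be reproduced with a few lines of arithmetic using the chapter's numbers (\(\gamma _2=11\), \(\varepsilon =0.28\) and the reported worst-case value 0.0673 of the bound in (8.24)), together with the sigmoidal smoothing used for the nonlinear term; this is only a numerical cross-check, not part of the design code.

```python
import numpy as np

gamma2 = 11.0        # L2 gain achieved by F in (8.66)
phi_max = 0.0673     # worst-case bound on ||Phi_tilde|| from (8.24), i.e. gamma1*(1 + c/sqrt(eps))

print(gamma2 * phi_max)          # approx 0.74
print(gamma2 * phi_max < 1.0)    # True: the small gain condition (8.28) holds
print(1.0 / phi_max)             # approx 14.86, the admissible upper bound on gamma2

def smoothed_unit_vector(sigma, delta=0.01):
    """Sigmoidal approximation sigma/(||sigma|| + delta) used in the simulations."""
    return sigma / (np.linalg.norm(sigma) + delta)

print(smoothed_unit_vector(np.array([0.02, -0.01])))
```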

8.3.2 Simulation Results

The manoeuvre considered in this chapter represents a change of altitude and speed using a series of \(-3\) deg FPA and \(-10\) m/s \(V_{tas}\) commands. This covers a wide range of the flight envelope, highlighting the efficacy of the FTC scheme when dealing with faults and failures. In this chapter two failure scenarios will be considered: one is an elevator jam and the other is a stabiliser runaway. For consistency, all the actuator failures are set to occur at 300 s.

Remark 8.7

Note that even though the controller is designed based on the LPV model from Appendix A.1.1, it is tested on the full high fidelity nonlinear aircraft model used as an FTC benchmark in the GARTEUR FM-AG16 project.

8.3.2.1 Elevator Jam

Figure 8.2 shows a comparison between the fault-free case and a scenario in which the elevator jams at 300 s. Despite the elevator jam, there is no visible difference in terms of the FPA and speed \(V_{tas}\) tracking performance. There is also no visible difference in terms of the altitude change between the failure and the fault-free case. It can be seen that immediately after the failure at 300 s, the estimate of the elevator effectiveness level drops to 10 %. This indicates imperfect estimation (it should be zero). Despite this imperfection, there is no difference in terms of tracking performance. The plot of the norm of the switching function \(\Vert \sigma (t)\Vert \) also shows no visible difference between the fault-free and the failure case. Finally, a plot of the adaptive gain shows the variation of \(\kappa (t,x)\) defined in (8.37). Again, there is no visible difference in terms of the adaptive gain between the fault-free and failure case. Note that in the fault-free case, the variation of the adaptive gain is due to a combination of variations in \(\Vert \sigma (t)\Vert \) and the states \(\Vert x(t)\Vert \), as described by (8.37) and (8.38).

Fig. 8.2 Elevator jam

Fig. 8.3 Stabiliser runaway

8.3.2.2 Stabiliser Runaway

Figure 8.3 shows the results for the case when a stabiliser runaway occurs. The effect of the stabiliser runaway can be seen in the control surface plot where the stabiliser moves at a maximum rate to the maximum position of 3 deg. The effect of the control relocation can be seen in the plot of the elevator which moves to 7 deg immediately after the failure occurs at 300 s. Despite the stabiliser runaway and the imperfect estimation of the stabiliser effectiveness, there is no visible difference in terms of tracking performance between the fault-free and the failure case. (The estimated stabiliser effectiveness level is shown as 10 % whereas the actual value should be zero.) The plot of the norm of the switching function \(\Vert \sigma (t)\Vert \) shows the difference between the fault-free and the failure case. Here it can be seen that the norm for the failure case is slightly higher than the fault-free case immediately after the failure at 300 s, but is still relatively small. Finally, the plot of the adaptive gain shows there is a slight difference between the fault-free and the failure case.

8.4 Summary

This chapter described an FTC scheme for linear parameter varying systems. Integral sliding mode control in conjunction with CA was used to maintain nominal performance and robustness in the face of actuator faults or failures. The virtual control signal, generated by the integral sliding mode control law, was translated into the physical actuator commands by using the control allocation scheme. The closed-loop stability of the system throughout the entire flight envelope was guaranteed, even in the event of total failure of a certain class of actuators (provided appropriate redundancy is available in the system). The scheme also takes into account imperfect estimation of actuator effectiveness levels and considers an adaptive gain for the nonlinear component of the control law. The FTC scheme has been tested on a full nonlinear aircraft benchmark model to highlight the efficacy of the scheme.

8.5 Notes and References

LPV methods have attracted much attention in recent years, especially for aircraft systems [2]. Using LPV techniques, guaranteed performance can be ensured over a wide range of operating regimes [3]. For LPV systems, several controller synthesis methods have been proposed in recent years in the framework of FTC: the advantages and capabilities of LPV controller synthesis (based on a single quadratic Lyapunov function approach) over gain-scheduling controller designs (based on \(\mathscr {H}_{\infty }\) controller synthesis) are discussed and compared in [4] by implementing the two techniques on a high fidelity atmospheric re-entry vehicle model. In [5], an output feedback synthesis method using LMIs is presented in order to preserve closed-loop stability in the case of multiple actuator faults. The authors in [6] have explored the combined use of fault estimation and fault compensation for LPV systems. Recently in [7] an active FTC technique was proposed for LPV systems to deal with actuator faults, in which the faults are identified using a UIO technique, and a state feedback controller is realised by approximating the LPV system in a polytopic form. There is almost no literature on the use of sliding mode controllers for LPV systems, with the exception of [8–11]. The work in [8, 9] proposed SMC schemes for LPV systems, although not in the context of fault tolerant control. In [12] the nonlinear longitudinal model of the RECOVER transport aircraft was approximated by polynomially fitting the aerodynamic coefficients obtained from [13], to create an LPV representation using the function substitution method. In this chapter, the LPV plant matrices are taken from [12]. In [2] the same system is considered but only elevator failures (lock and float) are considered, whereas the FTC scheme described in this chapter is also tested by considering a stabiliser failure (as well as elevator failure scenarios).