Introduction

The development of embedded control for the transportation systems industry faces problems of increased difficulty, arising from the highly nonlinear dynamics of the considered systems and from the technical difficulties in measuring specific elements of the systems’ state vector [1–6]. Previous approaches to the control of turbocharged diesel engines and exhaust gas recirculation systems comprise PID and Lyapunov methods. One can also find results on neural and fuzzy control, both for compression ignition (CI) and spark ignition (SI) engines [7–12]. In particular, control of turbocharged diesel engines is a complicated problem because the nonlinear model of the engine cannot be subjected to static feedback linearization [13–16]. Therefore, one has to apply dynamic feedback linearization by extending the state vector of the system so as to include as new state variables the derivatives of the initial control inputs. In comparison to systems which can be brought to a linear form through static feedback linearization, control of systems subjected to dynamic feedback linearization requires substantially different design stages [17]. By defining an extended state vector (an approach also known as dynamic extension), it is possible to show that the model of the turbocharged diesel engine is a differentially flat one. Moreover, by expressing all state variables and control inputs of the model as functions of the flat outputs and their derivatives, one can arrive at an equivalent description of the system in the linear canonical (Brunovsky) form.

In this paper the nonlinear control problem of turbocharged diesel engines is solved with the use of differential flatness theory and of adaptive fuzzy control. Differential flatness theory is currently one of the main directions in the development of nonlinear control systems [18–26]. It is shown that the dynamic model of the turbocharged diesel engine is differentially flat and admits dynamic feedback linearization. It is also shown that the dynamic model can be written in the linear Brunovsky canonical form, for which a state feedback controller can be easily designed. Moreover, to cope with modeling uncertainties and external disturbances, the paper proposes to implement adaptive fuzzy control with the use of the transformed diesel engine dynamical model that was obtained through the application of differential flatness theory.

Actually, the paper proposes a solution to the problem of observer-based adaptive fuzzy control for diesel engines and, in general, for MIMO nonlinear dynamical systems that admit dynamic feedback linearization. The design of the adaptive fuzzy controller considers that only the system’s output is measured and that the system’s model is unknown. The control algorithm aims at satisfying the \(H_\infty \) tracking performance criterion, which means that the influence of the modeling errors and the external disturbances on the tracking error is attenuated to an arbitrary desirable level. After transforming the MIMO system into the canonical form, the resulting control inputs are shown to contain nonlinear elements which depend on the system’s parameters. Since the parameters of the system are unknown, the nonlinear terms which appear in the control inputs have to be approximated with the use of neuro-fuzzy networks. Moreover, since only the system’s output is measurable, the complete state vector has to be reconstructed with the use of a state observer. In the current paper it is shown that a suitable learning law can be defined for the aforementioned neuro-fuzzy approximators so as to preserve the closed-loop system stability. Lyapunov stability analysis also proves that the proposed observer-based adaptive fuzzy control scheme results in \(H_{\infty }\) tracking performance, in accordance with the results of [27–31].

For the design of the observer-based adaptive fuzzy controller one has to solve two Riccati equations, where the first one is associated with the controller and the second one is associated with the observer. Parameters that affect the closed-loop robustness are: (i) the feedback gain vector K, (ii) the observer’s gain vector \(K_o\), and (iii) the positive definite matrices \(P_1\) and \(P_2\) which stem from the solution of the two algebraic Riccati equations and which weigh the aforementioned controller and observer terms. The proposed control architecture guarantees that the output of the closed-loop system will asymptotically track the desired trajectory and that \(H_\infty \) performance will be achieved. The efficiency of the proposed control method is tested through simulation experiments.

The structure of the paper is as follows: in “Dynamic Model of the Turbocharged Diesel Engine” section the dynamic model of the turbocharged diesel engine is analyzed. In “Nonlinear Control of the Diesel Engine using Lie Algebra” section it is shown how linearization and state feedback control for the diesel engine can be performed with the use of Lie algebra. In “Nonlinear Control of the Diesel Engine Using Differential Flatness Theory” section it is shown that the dynamic model of the turbocharged diesel engine is a differentially flat one, and that linearization and state feedback control can be applied with the use of differential flatness theory. In “Application of Flatness-Based Adaptive Fuzzy Control to the MIMO Diesel Engine Model” section the stages of differential flatness theory-based adaptive fuzzy control for the model of the diesel engine are explained. In “Lyapunov Stability Analysis” section Lyapunov stability analysis is provided for the adaptive fuzzy control loop that is implemented on the dynamical model of the turbocharged diesel engine. In “Simulation Tests” section simulation tests are carried out to evaluate the performance of the diesel engine control loop. Finally, in “Conclusions” section concluding remarks are stated.

Dynamic Model of the Turbocharged Diesel Engine

The basic parameters of the diesel engine are: (i) the gas pressure in the intake manifold \(p_1\), (ii) the gas pressure in the exhaust manifold \(p_2\), (iii) the turbine power \(P_t\), and (iv) the compressor power \(P_c\). Additional variables of importance are the compressor mass flow rate \(W_c\), the intake manifold temperature \(T_1\), the exhaust manifold temperature \(T_2\), the turbine mass flow rate \(W_t\), and the exhaust gas recirculation flow rate \(W_{EGR}\) (Fig. 1).

Fig. 1
figure 1

Diagram of the turbocharged Diesel engine

The basic relations of the diesel engine’s dynamics are:

$$\begin{aligned} \dot{p}_1= & {} {K_1}(W_c+u_1-{K_e}{p_1})+{\dot{T}_1 \over {T_1}}{p_1}\nonumber \\ \dot{p}_2= & {} {K_2}({K_e}{p_1}-{u_1}-{u_2})+{\dot{T}_2 \over {T_2}}{p_2} \\ \dot{P}_c= & {} {1 \over \tau }(\eta _m{P_t}-{P_c})\nonumber \end{aligned}$$
(1)

The control inputs to this model are the exhaust gas recirculation (EGR) flow rate \(u_1=W_{EGR}\) and the turbine mass flow rate \(u_2=W_t\). Moreover, it holds that

$$\begin{aligned} {W_c}= & {} {P_c}{{K_c} \over {p_1^\mu -1}} \end{aligned}$$
(2)
$$\begin{aligned} {P_t}= & {} {K_t}\left( 1-{p_2^{-\mu }}\right) {u_2} \end{aligned}$$
(3)

The model is simplified by setting \(\dot{T}_1=0\) and \(\dot{T}_2=0\). In such a case the associated state-space equations are given by

$$\begin{aligned} \dot{p}_1= & {} {K_1}(W_c+u_1-{K_e}{p_1}) \nonumber \\ \dot{p}_2= & {} {K_2}({K_e}{p_1}-{u_1}-{u_2}) \\ \dot{P}_c= & {} {1 \over \tau }({\eta _m}{P_t}-{P_c})\nonumber \end{aligned}$$
(4)

Moreover, it holds that

$$\begin{aligned} {W_c}= & {} {P_c}{{K_c} \over {p_1^{\mu }-1}} \end{aligned}$$
(5)
$$\begin{aligned} {P_t}= & {} {K_t}(1-{p_2}^{-\mu }){u_2} \end{aligned}$$
(6)

The description of the diesel engine in state-space form is given by

$$\begin{aligned} \dot{x}=f(x)+{g_a}(x){u_1}+{g_b}(x){u_2} \end{aligned}$$
(7)

where

$$\begin{aligned} f(x)= & {} \begin{pmatrix} {K_1}{K_c}{{P_c} \over {p_1^{\mu }-1}}-{K_1}{K_e}{p_1} \\ {K_2}{K_e}{p_1} \\ -{{P_c} \over \tau } \end{pmatrix}\nonumber \\ g_a(x)= & {} \begin{pmatrix} K_1 \\ -{K_2} \\ 0 \end{pmatrix} \quad g_b(x)=\begin{pmatrix} 0 \\ -{K_2} \\ {K_o}(1-{p_2}^{-\mu }) \end{pmatrix} \end{aligned}$$
(8)

With respect to the control, the output variables are: (i) the intake manifold pressure \(p_1\), and (ii) the compressor mass flow rate \(W_c\), that is

$$\begin{aligned} y=\begin{pmatrix} p_1 \\ W_c \end{pmatrix}= \begin{pmatrix} p_1 \\ {P_c}{{K_c} \over {p_1^{\mu }-1}} \end{pmatrix} \end{aligned}$$
(9)
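
For illustration, the simplified model of Eqs. (4)–(6) together with the outputs of Eq. (9) can be integrated numerically. The following short Python sketch is only indicative; the parameter values \(K_1, K_2, K_e, K_c, K_t, \eta _m, \tau , \mu \), the constant inputs and the initial conditions are assumptions made for demonstration purposes and do not correspond to identified engine data.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative (assumed) parameter values -- not identified engine data
K1, K2, Ke, Kc, Kt = 1.0, 1.2, 0.5, 0.8, 0.6
eta_m, tau, mu = 0.9, 0.2, 0.286

def engine_dynamics(t, x, u1, u2):
    """Simplified model of Eqs. (4)-(6): x = [p1, p2, Pc]."""
    p1, p2, Pc = x
    Wc = Pc * Kc / (p1**mu - 1.0)        # compressor mass flow rate, Eq. (5)
    Pt = Kt * (1.0 - p2**(-mu)) * u2     # turbine power, Eq. (3)
    dp1 = K1 * (Wc + u1 - Ke * p1)
    dp2 = K2 * (Ke * p1 - u1 - u2)
    dPc = (eta_m * Pt - Pc) / tau
    return [dp1, dp2, dPc]

# constant EGR and turbine flow inputs (assumed values)
u1, u2 = 0.05, 0.3
x0 = [1.5, 1.8, 0.4]                     # initial p1, p2, Pc (assumed)
sol = solve_ivp(engine_dynamics, (0.0, 5.0), x0, args=(u1, u2), max_step=1e-2)

p1, p2, Pc = sol.y
Wc = Pc * Kc / (p1**mu - 1.0)            # output y2 of Eq. (9)
print("final outputs:  p1 = %.3f,  Wc = %.3f" % (p1[-1], Wc[-1]))
```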

Nonlinear Control of the Diesel Engine Using Lie Algebra

A Low Order Feedback Control Scheme

The previous definition of the system’s outputs given in Eq. (9) is used. The linearization of the Diesel engine dynamics is based on the following relations

$$\begin{aligned} \begin{aligned} {z_1^1}&={h_1}(x) \\ {z_2^1}&={L_f}{h_1}(x) \\ \dot{z}_1^1&={L_f}{h_1}(x)+{L_{g_a}}{h_1}(x){u_1}+{L_{g_b}}{h_1}(x){u_2}\\ {z_1^2}&={h_2}(x) \\ {z_2^2}&={L_f}{h_2}(x) \\ \dot{z}_1^2&={L_f}{h_2}(x)+{L_{g_a}}{h_2}(x){u_1}+{L_{g_b}}{h_2}(x){u_2} \end{aligned} \end{aligned}$$
(10)

It holds that

$$\begin{aligned} {z_1^1}(x)= & {} {p_1} \nonumber \\ {L_f}{h_1}(x)= & {} {{\partial {h_1}} \over {{\partial }{x_1}}}{f_1}+{{\partial {h_1}} \over {{\partial }{x_2}}}{f_2}+{{\partial {h_1}} \over {{\partial }{x_3}}}{f_3}{\Rightarrow }\nonumber \\ {L_f}{h_1}(x)= & {} 1{f_1}+0{f_2}+0{f_3}{\Rightarrow }{L_f}{h_1}(x)\nonumber \\= & {} {K_1}{K_c}{{P_c} \over {p_1^\mu -1}}-{K_1}{K_e}{p_1} \end{aligned}$$
(11)

Moreover, it holds that

$$\begin{aligned}&{L_{g_a}}{h_1}(x)={{{\partial }{h_1}} \over {{\partial }{x_1}}}{g_{a_1}}+{{{\partial }{h_1}} \over {{\partial }{x_2}}}{g_{a_2}}+{{{\partial }{h_1}} \over {{\partial }{x_3}}}{g_{a_3}}{\Rightarrow }\nonumber \\&{L_{g_a}}{h_1}(x)=1{g_{a_1}}+0{g_{a_2}}+0{g_{a_3}}\,{\Rightarrow }\,{L_{g_a}}{h_1}(x)={K_1} \end{aligned}$$
(12)

while it also holds that

$$\begin{aligned}&{L_{g_b}}{h_1}(x)={{{\partial }{h_1}} \over {{\partial }{x_1}}}{g_{b_1}}+{{{\partial }{h_1}} \over {{\partial }{x_2}}}{g_{b_2}}+{{{\partial }{h_1}} \over {{\partial }{x_3}}}{g_{b_3}}{\Rightarrow }\nonumber \\&{L_{g_b}}{h_1}(x)=1{g_{b_1}}+0{g_{b_2}}+0{g_{b_3}}\,{\Rightarrow }\,{L_{g_b}}{h_1}(x)=0 \end{aligned}$$
(13)

Equivalently it holds

$$\begin{aligned} {L_f}{h_2}(x)= & {} {{{\partial }{h_2}} \over {{\partial }{x_1}}}{f_1}+{{{\partial }{h_2}} \over {{\partial }{x_2}}}{f_2}+{{{\partial }{h_2}} \over {{\partial }{x_3}}}{f_3}{\Rightarrow }\nonumber \\ {L_f}{h_2}(x)= & {} {P_c}{{{-K_c}{\mu }{p_1^{\mu -1}}} \over {(p_1^{\mu }-1)^2}}\left( {K_1}{K_c}{{P_c} \over {p_1^{\mu }-1}}-{K_1}{K_e}{p_1}\right) \nonumber \\&+{{K_c} \over {p_1^{\mu }-1}}\left( -{P_c \over \tau }\right) \end{aligned}$$
(14)

Moreover, it holds

$$\begin{aligned} {L_{g_a}}{h_2}(x)= & {} {{{\partial }{h_2}} \over {{\partial }{x_1}}}{g_{a_1}}+{{{\partial }{h_2}} \over {{\partial }{x_2}}}{g_{a_2}}+{{{\partial }{h_2}} \over {{\partial }{x_3}}}{g_{a_3}}{\Rightarrow }\nonumber \\ {L_{g_a}}{h_2}(x)= & {} {P_c}{{{-K_c}{\mu }{p_1^{\mu -1}}} \over {(p_1^{\mu }-1)^2}}{g_{a_1}}+0{g_{a_2}}\nonumber \\&+{{K_c} \over {p_1^{\mu }-1}}{g_{a_3}}{\Rightarrow }\nonumber \\ {L_{g_a}}{h_2}(x)= & {} {P_c}{{{-K_c}{\mu }{p_1^{\mu -1}}} \over {(p_1^{\mu }-1)^2}}{K_1} \end{aligned}$$
(15)

Equivalently, one obtains

$$\begin{aligned} {L_{g_b}}{h_2}(x)= & {} {{{\partial }{h_2}} \over {{\partial }{x_1}}}{g_{b_1}}+{{{\partial }{h_2}} \over {{\partial }{x_2}}}{g_{b_2}}+{{{\partial }{h_2}} \over {{\partial }{x_3}}}{g_{b_3}}{\Rightarrow }\nonumber \\ {L_{g_b}}{h_2}(x)= & {} {P_c}{{{-K_c}{\mu }{p_1^{\mu -1}}} \over {p_1^{\mu }-1}^2}{g_{b_1}}+0{g_{b_2}}+{{K_c} \over {p_1^{\mu }-1}}{g_{b_3}}{\Rightarrow } \nonumber \\ {L_{g_b}}{h_2}(x)= & {} {{K_c} \over {p_1^{\mu }-1}}{K_o}(1-{p_2^{-\mu }}) \end{aligned}$$
(16)

From the relations

$$\begin{aligned} \dot{z}_1^1= & {} {L_f}{h_1}(x)+{L_{g_a}}{h_1}(x){u_1}+{L_{g_b}}{h_1}(x){u_2}\nonumber \\ \dot{z}_1^2= & {} {L_f}{h_2}(x)+{L_{g_a}}{h_2}(x){u_1}+{L_{g_b}}{h_2}(x){u_2} \end{aligned}$$
(17)

which is also written as

$$\begin{aligned}&\begin{pmatrix} \dot{z}_1^1 \\ \dot{z}_1^2 \end{pmatrix}\nonumber \\&\quad = \begin{pmatrix} {K_1}{K_c}{{P_c} \over {p_1^{\mu }-1}}-{K_1}{K_e}{p_1} \\ {P_c}{{{-K_c}{\mu }{p_1^{\mu -1}}} \over {(p_1^{\mu }-1)^2}}\left( {K_1}{K_c}{{P_c} \over {p_1^{\mu }-1}}-{K_1}{K_e}{p_1}\right) +{{K_c} \over {p_1^{\mu }-1}}\left( -{P_c \over \tau }\right) \end{pmatrix}\nonumber \\&\qquad +\begin{pmatrix} K_1 &{} 0 \\ {P_c}{{{-K_c}{\mu }{p_1^{\mu -1}}} \over {(p_1^{\mu }-1)^2}}{K_1} &{} {{K_c} \over {p_1^{\mu }-1}}{K_o}(1-{p_2^{-\mu }}) \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} \end{aligned}$$
(18)

Therefore, one arrives at a relation of the form

$$\begin{aligned} \dot{\tilde{z}}_1=\tilde{f}_a+{\tilde{M}_a}\tilde{u} \end{aligned}$$
(19)

By defining the new control inputs

$$\begin{aligned} v_1= & {} {L_f}{h_1}(x)+{L_{g_a}}{h_1}(x){u_1}+{L_{g_b}}{h_1}(x){u_2} \nonumber \\ v_2= & {} {L_f}{h_2}(x)+{L_{g_a}}{h_2}(x){u_1}+{L_{g_b}}{h_2}(x){u_2} \end{aligned}$$
(20)

one has the dynamics

$$\begin{aligned} \dot{z}_1^1= & {} {v_1} \nonumber \\ \dot{z}_1^2= & {} {v_2} \end{aligned}$$
(21)

In such a case, the feedback control law

$$\begin{aligned} {v_1}= & {} \dot{z}_{1,d}^{1}-{K_p^1}(z_1^1-z_{1,d}^1) \nonumber \\ {v_2}= & {} \dot{z}_{1,d}^{2}-{K_p^2}(z_1^2-z_{1,d}^2) \end{aligned}$$
(22)

results in asymptotic elimination of the tracking error. Therefore, in that case the tracking error dynamics becomes

$$\begin{aligned} \dot{e}_1+{K_p^1}{e_1}= & {} 0\,{\Rightarrow }\,lim_{t{\rightarrow }\infty }{e_1}(t)=0, \ {K_p^1}>0 \nonumber \\ \dot{e}_2+{K_p^2}{e_2}= & {} 0\,{\Rightarrow }\,lim_{t{\rightarrow }\infty }{e_2}(t)=0, \ {K_p^2}>0 \end{aligned}$$
(23)
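
The computation of the above feedback control law can be summarized in a short numerical sketch. The gains and state values used below are assumptions introduced only for demonstration; the drift vector and decoupling matrix follow Eq. (18), the auxiliary inputs follow Eq. (22) and the physical inputs are recovered by inverting Eq. (19).

```python
import numpy as np

# Illustrative (assumed) parameters, as in the previous sketch
K1, K2, Ke, Kc, Ko = 1.0, 1.2, 0.5, 0.8, 0.6
tau, mu = 0.2, 0.286
Kp1, Kp2 = 5.0, 5.0                      # feedback gains of Eq. (22)

def loworder_control(x, z_d, zdot_d):
    """Static feedback linearizing law of Eqs. (18)-(22).
    x = [p1, p2, Pc];  z_d, zdot_d = desired outputs and their derivatives."""
    p1, p2, Pc = x
    den = p1**mu - 1.0
    # drift vector f_a and decoupling matrix M_a of Eq. (18)
    f_a = np.array([
        K1*Kc*Pc/den - K1*Ke*p1,
        Pc*(-Kc*mu*p1**(mu - 1))/den**2 * (K1*Kc*Pc/den - K1*Ke*p1)
        + (Kc/den)*(-Pc/tau),
    ])
    M_a = np.array([
        [K1, 0.0],
        [Pc*(-Kc*mu*p1**(mu - 1))/den**2 * K1, (Kc/den)*Ko*(1.0 - p2**(-mu))],
    ])
    z = np.array([p1, Pc*Kc/den])                 # current outputs of Eq. (9)
    v = zdot_d - np.array([Kp1, Kp2])*(z - z_d)   # auxiliary inputs, Eq. (22)
    u = np.linalg.solve(M_a, v - f_a)             # invert Eq. (19)
    return u                                      # [u1, u2]

print(loworder_control([1.5, 1.8, 0.4], np.array([1.6, 0.9]), np.zeros(2)))
```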

The previous results are confirmed through the computation of time derivatives and the use of differential flatness theory:

$$\begin{aligned} {y_1}={p_1}\,{\Rightarrow }\,\dot{y}_1=\dot{p}_1{\Rightarrow }\dot{y}_1={K_1}{K_c}{{P_c} \over {p_1^\mu -1}}-{K_1}{K_e}{p_1}+{K_1}{u_1}\nonumber \\ \end{aligned}$$
(24)

Similarly it holds

$$\begin{aligned} y_2= & {} {p_c}{{k_c} \over {p_1^\mu -1}}\,{\Rightarrow }\,{\dot{y}_2}={{{\partial }{y_2}} \over {{\partial }{x_1}}}\dot{x}_1+{{{\partial }{y_2}} \over {{\partial }{x_2}}}\dot{x}_2+{{{\partial }{y_2}} \over {{\partial }{x_3}}}\dot{x}_3{\Rightarrow }\nonumber \\ \dot{y}_2= & {} {p_c}{{-K_c}{\mu }{p_1^{{\mu -1}}} \over {(p_1^{\mu })-1}^2}\dot{x}_1+0\dot{x}_2+{{K_c} \over {p_1^{\mu }-1}}\dot{x}_3{\Rightarrow }\nonumber \\ \dot{y}_2= & {} {p_c}{{-K_c}{\mu }{p_1^{{\mu -1}}} \over {(p_1^{\mu })-1}^2}\left[ \left( {K_1}{K_c}{{P_c} \over {p_1^{\mu }-1}}-{K_1}{K_e}{p_1}\right) \right. \nonumber \\&+{K_1}{u_1}\Bigg ]+{{K_c} \over {p_1^{\mu }-1}}\left[ -{P_c \over \tau }+{K_o}(1-{p_2^{-\mu }}){u_2}\right] {\Rightarrow }\nonumber \\ \dot{y}_2= & {} {p_c}{{-K_c}{\mu }{p_1^{{\mu -1}}} \over {(p_1^{\mu })-1}^2}\left( {K_1}{K_c}{{P_c} \over {p_1^{\mu }-1}}-{K_1}{K_e}{p_1}\right) \nonumber \\&+{{K_c} \over {p_1^{\mu }-1}}\left( -{P_c \over \tau }\right) +{p_c}{{-K_c}{\mu }{p_1^{{\mu -1}}} \over {(p_1^{\mu })-1}^2}{K_1}{u_1}\nonumber \\&+{{K_c} \over {p_1^{\mu }-1}}{K_o}(1-{p_2^{-\mu }}){u_2} \end{aligned}$$
(25)

Therefore, one arrives again at a dynamics of the following form:

$$\begin{aligned}&\begin{pmatrix} \dot{y}_1 \\ \dot{y}_2 \end{pmatrix}\nonumber \\&= \begin{pmatrix} {K_1}{K_c}{{P_c} \over {p_1^{\mu }-1}}-{K_1}{K_e}{p_1} \\ {P_c}{{-K_c}{\mu }{p_1^{\mu -1}} \over {(p_1^{\mu }-1)}^2}\left( {K_1}{K_c}{{P_c} \over {p_1^{\mu }-1}}-{K_1}{K_e}{p_1}\right) +{{K_c} \over {p_1^{\mu }-1}}\left( -{P_c \over \tau }\right) \end{pmatrix}\nonumber \\&\quad +\,\begin{pmatrix} K_1 &{} \quad 0 \\ {P_c}{{-K_c}{\mu }{p_1^{\mu -1}} \over {(p_1^{\mu }-1)}^2}({K_1}) &{}\quad {{K_c} \over {p_1^{\mu }-1}}{K_o}(1-{p_2^{-\mu }}) \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \end{pmatrix} \end{aligned}$$
(26)

which finally gives

$$\begin{aligned} \dot{\tilde{y}}=\tilde{f}_a+{\tilde{M}_a}u \end{aligned}$$
(27)

A Dynamic Extension-Based Feedback Control Scheme

In the following, dynamic feedback linearization is performed, which means that the control inputs appearing in the linearized form of the system are not only functions of the initial control inputs \(u_1,u_2\) but also contain terms which are based on the derivatives \(\dot{u}_1,\dot{u}_2\). Thus, one has \(v_1=f_1(u_1,\dot{u}_1)\) and \(v_2=f_2(u_2,\dot{u}_2)\). Equivalently, this means that the control inputs \(u_1,u_2\) which are finally applied to the real system are obtained from \(v_1,v_2\) through an integration relation.

The dynamical system of the diesel engine is written in an extended form using the variables \(v_1=\dot{u}_1=\dot{z}\), \(v_2=u_2\), which means \(u_1={\int }{v_1}dt\), \(u_2=v_2\). Thus, using Eq. (7) and Eq. (8) and by substituting \(u_1=z\), where \(z\) (with \(\dot{z}=v_1\)) is an intermediate state variable, it holds

$$\begin{aligned} \dot{p}_1= & {} {K_1}{K_c}{{P_c} {\over } {p_1^{\mu }-1}}-{K_1}{K_e}{p_1}+{K_1}z \nonumber \\ \dot{p}_2= & {} {K_2}{K_e}{p_1}-{K_2}z-{K_2}{v_2} \nonumber \\ \dot{P}_c= & {} -{{P_c} \over \tau }+{K_o}(1-{p_2}^{-\mu }) \nonumber \\ \dot{z}= & {} {v_1} \end{aligned}$$
(28)

therefore, by defining the state vector \(x=[x_1,x_2,x_3,x_4]^T=[p_1,p_2,P_c,z]^T\) the state-space description of the diesel engine model becomes

$$\begin{aligned} \dot{x}_1= & {} {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}-{K_1}{K_e}{x_1}+{K_1}{x_4} \nonumber \\ \dot{x}_2= & {} {K_2}{K_e}{x_1}-{K_2}{x_4}-{K_2}{v_2} \nonumber \\ \dot{x}_3= & {} -{x_3 \over \tau }+{K_o}(1-{x_2}^{-\mu }) \nonumber \\ \dot{x}_4= & {} v_1 \end{aligned}$$
(29)

Consequently, in matrix form one has

$$\begin{aligned} \begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{pmatrix}= & {} \begin{pmatrix} {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}-{K_1}{K_e}{x_1}+{K_1}{x_4} \\ {K_2}{K_e}{x_1}-{K_2}{x_4} \\ -{x_3 \over \tau }+{K_o}(1-x_2^{-\mu }) \\ 0 \end{pmatrix}\nonumber \\&+ \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}{v_1}+ \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}{v_2} \end{aligned}$$
(30)
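
A minimal numerical sketch of the dynamically extended model is given next, following the matrix form of Eq. (30) (with the exponent of \(x_2\) written as in Eq. (29)). The parameter values, inputs and initial state are assumptions used only for demonstration.

```python
import numpy as np

# Illustrative (assumed) parameters
K1, K2, Ke, Kc, Ko = 1.0, 1.2, 0.5, 0.8, 0.6
tau, mu = 0.2, 0.286

def extended_dynamics(x, v1, v2):
    """Dynamically extended model of Eqs. (29)-(30): x = [p1, p2, Pc, z], z = u1."""
    p1, p2, Pc, z = x
    f = np.array([
        K1*Kc*Pc/(p1**mu - 1.0) - K1*Ke*p1 + K1*z,
        K2*Ke*p1 - K2*z,
        -Pc/tau + Ko*(1.0 - p2**(-mu)),
        0.0,
    ])
    ga = np.array([0.0, 0.0, 0.0, 1.0])   # channel of v1 = du1/dt
    gb = np.array([0.0, 1.0, 0.0, 0.0])   # channel of v2 = u2 (as in Eq. (30))
    return f + ga*v1 + gb*v2

# simple forward-Euler propagation of the extended state (illustrative)
x = np.array([1.5, 1.8, 0.4, 0.05])
dt = 1e-3
for _ in range(1000):
    x = x + dt*extended_dynamics(x, 0.0, 0.3)
print("extended state after 1 s:", np.round(x, 3))
```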

The system’s outputs are chosen to be

$$\begin{aligned} y_1= & {} x_1=p_1 \nonumber \\ y_2= & {} {P_c}{{K_c} \over {p_1^{\mu }-1}}\,{\Rightarrow }\,{y_2}={x_3}{{K_c} \over {x_1^{\mu }-1}} \end{aligned}$$
(31)

Linearization of the system is performed using the following relations-state variables

$$\begin{aligned} z_1^1= & {} h_1(x) \nonumber \\ z_2^1= & {} {L_f}{h_1}(x) \nonumber \\ \dot{z}_2^1= & {} {L_f^2}{h_1}(x)+{L_{g_a}}{L_f}{h_1}(x){u_1}+{L_{g_b}}{L_f}{h_1}(x){u_2} \end{aligned}$$
(32)

and equivalently

$$\begin{aligned} z_1^2= & {} h_2(x) \nonumber \\ z_2^2= & {} {L_f}{h_2}(x) \nonumber \\ \dot{z}_2^2= & {} {L_f^2}{h_2}(x)+{L_{g_a}}{L_f}{h_2}(x){u_1}+{L_{g_b}}{L_f}{h_2}(x){u_2} \end{aligned}$$
(33)

It holds that

$$\begin{aligned} z_1^1= & {} h_1(x)\,{\Rightarrow }\,z_1^1=x_1 \end{aligned}$$
(34)
$$\begin{aligned} z_2^1= & {} {L_f}h_1(x)\,{\Rightarrow }\,z_2^1={{{\partial }{h_1}} \over {{\partial }{x_1}}}{f_1}\nonumber \\&+{{{\partial }{h_1}} \over {{\partial }{x_2}}}{f_2}+{{{\partial }{h_1}} \over {{\partial }{x_3}}}{f_3}+{{{\partial }{h_1}} \over {{\partial }{x_4}}}{f_4}{\Rightarrow }\nonumber \\ z_2^1= & {} 1{\cdot }{f_1}+0{\cdot }{f_2}+0{\cdot }{f_3}+0{\cdot }{f_4}\,{\Rightarrow }\,{z_2^1}=f_1\,{\Rightarrow }\, \nonumber \\ z_2^1= & {} {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}-{K_1}{K_e}{x_1}+{K_1}{x_4} \end{aligned}$$
(35)

Moreover, it holds that

$$\begin{aligned} {L_f^2}{h_1}(x)= & {} {L_f}{z_2^1}\,{\Rightarrow }\,{L_f^2}{h_1}(x)\nonumber \\= & {} {{{\partial }{z_2^1}} \over {{\partial }{x_1}}}{f_1}+{{{\partial }{z_2^1}} \over {{\partial }{x_2}}}{f_2}+{{{\partial }{z_2^1}} \over {{\partial }{x_3}}}{f_3}+{{{\partial }{z_2^1}} \over {{\partial }{x_4}}}{f_4}{\Rightarrow } \nonumber \\ {L_f^2}{h_1}(x)= & {} \left( {K_1}{K_c}{{{-x_3}{\mu }{x_1^{\mu -1}}} \over {(x_1^{\mu }-1)^2}}-{K_1}{K_e}\right) {f_1}\nonumber \\&+\,0{f_2}+\left( {{{K_1}{K_c}} \over {x_1^{\mu }-1}}\right) {f_3}+{K_1}{f_4}{\Rightarrow } \nonumber \\ {L_f^2}{h_1}(x)= & {} \left( {K_1}{K_c}{{{-x_3}{\mu }{x_1^{\mu -1}}} \over {(x_1^{\mu }-1)^2}}-{K_1}{K_e}\right) \left( {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}\right. \nonumber \\&-\,{K_1}{K_e}{x_1}+{K_1}{x_4}\Bigg )+\left( {{{K_1}{K_c}} \over {x_1^{\mu }-1}}\right) \left( -{x_3 \over \tau }\nonumber \right. \\&\left. +\,{K_o}(1-x_2^{-\mu })\right) \end{aligned}$$
(36)

Equivalently, one computes

$$\begin{aligned} {L_{g_a}}{L_f}{h_1}(x)= & {} {L_{g_a}}{z_2^1}\,{\Rightarrow }\,{L_{g_a}}{L_f}{h_1}(x)={{{\partial }{z_2^1}} \over {{\partial }{x_1}}}{g_{a_1}}+{{{\partial }{z_2^1}} \over {{\partial }{x_2}}}{g_{a_2}}\nonumber \\&+{{{\partial }{z_2^1}} \over {{\partial }{x_3}}}{g_{a_3}}+{{{\partial }{z_2^1}} \over {{\partial }{x_4}}}{g_{a_4}}{\Rightarrow } \nonumber \\ {L_{g_a}}{L_f}{h_1}(x)= & {} \left( {K_1}{K_c}{{{-x_3}{\mu }{x_1^{\mu -1}}} \over {(x_1^{\mu }-1)^2}}-{K_1}{K_e}\right) {g_{a_1}}\nonumber \\&+\,0{g_{a_2}}+\left( {{{K_1}{K_c}} \over {x_1^{\mu }-1}}\right) {g_{a_3}}+{K_1}{g_{a_4}}{\Rightarrow } \nonumber \\ {L_{g_a}}{L_f}{h_1}(x)= & {} {K_1} \end{aligned}$$
(37)

and in a similar manner one obtains

$$\begin{aligned} {L_{g_b}}{L_f}{h_1}(x)= & {} {L_{g_b}}{z_2^1}\,{\Rightarrow }\,{L_{g_b}}{L_f}{h_1}(x)={{{\partial }{z_2^1}} \over {{\partial }{x_1}}}{g_{b_1}}\nonumber \\&+{{{\partial }{z_2^1}} \over {{\partial }{x_2}}}{g_{b_2}}+{{{\partial }{z_2^1}} \over {{\partial }{x_3}}}{g_{b_3}}+{{{\partial }{z_2^1}} \over {{\partial }{x_4}}}{g_{b_4}}{\Rightarrow }\nonumber \\ {L_{g_b}}{L_f}{h_1}(x)= & {} \left( {K_1}{K_c}{{{-x_3}{\mu }{x_1^{\mu -1}}} \over {(x_1^{\mu }-1)^2}}-{K_1}{K_e}\right) {g_{b_1}}+0{g_{b_2}}\nonumber \\&+\left( {{{K_1}{K_c}} \over {x_1^{\mu }-1}}\right) {g_{b_3}}+{K_1}{g_{b_4}}{\Rightarrow } \nonumber \\ {L_{g_b}}{L_f}{h_1}(x)= & {} 0 \end{aligned}$$
(38)

Following an equivalent procedure one computes

$$\begin{aligned} {z_2^2}= & {} {L_f}{h_2}(x)={x_3}{{-K_c}{\mu }{x_1^{\mu -1}} \over {(x_1^{\mu }-1)^2}}\left( {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}-{K_1}{K_e}{x_1}\right. \nonumber \\&+{K_1}{x_4}\Bigg )+{{K_c} \over {x_1^{\mu }-1}}\left( -{x_3 \over \tau }+{K_o}(1-x_2^{-\mu })\right) \end{aligned}$$
(39)

In a similar manner one gets

$$\begin{aligned}&{L_f^2}{h_2}(x)={{{\partial }{z_2^2}} \over {{\partial }{x_1}}}{f_1}+{{{\partial }{z_2^2}} \over {{\partial }{x_2}}}{f_2}+ {{{\partial }{z_2^2}} \over {{\partial }{x_3}}}{f_3}+{{{\partial }{z_2^2}} \over {{\partial }{x_4}}}{f_4}{\Rightarrow } \nonumber \\&{L_f^2}{h_2}(x)\nonumber \\&=\left\{ {x_3}{{({{-K_c}\mu {\mu -1}x_1^{\mu -2}})(x_1^{\mu }-1)^2-(-{K_c}{\mu }{x_1^{\mu -1}})2(x_1^{\mu }-1){\mu }{x_1^{\mu -1}}} \over {x_1^{\mu }-1}^4}\right. \nonumber \\&\quad \times \left. \left( {{K_1}{K_c}{x_3} \over {x_1^\mu -1}-{K_1}{K_c}+{K_1}{x_4}}\right) \right. \nonumber \\&\quad \left. +\,{x_3}{{{-K_c}{\mu }{x_1^{\mu -1}}} \over {x_1^{\mu }-1}}\left( {{K_1}{K_c}{-x_3}{\mu }{x_1^{\mu -1}} \over {(x_1^{\mu }-1)}^2}-{K_1}{K_c}\right) \right. \nonumber \\&\quad \left. +\,{{{-Kc}{\mu }{x_1^{\mu -1}}} \over {x_1^{\mu }-1}^2}\left( -{{x_3 \over \tau }+{K_o}(1-x_2^{-\mu })}\right) \right\} \nonumber \\&\quad \times \left( {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}-{K_1}{K_e}{x_1}+{K_1}{x_4}\right) \nonumber \\&\quad +\,{{K_c} \over {x_1^{\mu }-1}}{K_o}{\mu }{x_2^{-\mu -1}}({K_2}{K_e}{x_1}-{K_2}{x_4})\nonumber \\&\quad +\,\left\{ {x_3}{{{-K_c}{\mu }{x_1^{\mu -1}}} \over {x_1^{\mu }-1}^2}{{{K_1}{K_c}} \over {x_1^{\mu }-1}}+{{K_c} \over {x_1^{\mu }-1}}(-{1 \over \tau })\right\} \nonumber \\&\quad \times \left( -{x_3 \over \tau }+{K_o}(1-x_2^{-\mu })\right) +\left\{ {x_3}{{{-K_c}{\mu }{x_1^{\mu -1}}} \over {x_1^{\mu }-1}^2}{K_1}\right\} 0 \end{aligned}$$
(40)

Equivalently, one computes

$$\begin{aligned} {L_{g_a}}{L_f}{h_2}(x)= & {} {{{\partial }{z_2^2}} \over {{\partial }{x_1}}}{g_{a_1}}+{{{\partial }{z_2^2}} \over {{\partial }{x_2}}}{g_{a_2}}+{{{\partial }{z_2^2}} \over {{\partial }{x_3}}}{g_{a_3}}+{{{\partial }{z_2^2}} \over {{\partial }{x_4}}}{g_{a_4}}{\Rightarrow }\nonumber \\ {L_{g_a}}{L_f}{h_2}(x)= & {} {{{\partial }{z_2^2}} \over {{\partial }{x_4}}}1\,{\Rightarrow }\,{L_{g_a}}{L_f}{h_2}(x)={x_3}{{{-K_c}{\mu }{x_1^{\mu -1}}} \over {(x_1^{\mu }-1)^2}}{K_1}\nonumber \\ \end{aligned}$$
(41)

and similarly

$$\begin{aligned} {L_{g_b}}{L_f}{h_2}(x)= & {} {{{\partial }{z_2^2}} \over {{\partial }{x_1}}}{g_{b_1}}+{{{\partial }{z_2^2}} \over {{\partial }{x_2}}}{g_{b_2}}+{{{\partial }{z_2^2}} \over {{\partial }{x_3}}}{g_{b_3}}+{{{\partial }{z_2^2}} \over {{\partial }{x_4}}}{g_{b_4}}{\Rightarrow }\nonumber \\ {L_{g_b}}{L_f}{h_2}(x)= & {} {{{\partial }{z_2^2}} \over {{\partial }{x_2}}}1\,{\Rightarrow }\,{L_{g_b}}{L_f}{h_2}(x)={{K_c} \over {x_1^{\mu }-1}}{K_o}{\mu }{x_2^{-\mu -1}}\nonumber \\ \end{aligned}$$
(42)

Consequently, after the change of coordinates one has the following description

$$\begin{aligned} {\dot{z}_1^1}= & {} z_2^1 \nonumber \\ {\dot{z}_2^1}= & {} {L_f^2}{h_1}(x)+{L_{g_a}}{L_f}{h_1}(x){v_1}+{L_{g_b}}{L_f}{h_1}(x){v_2}\nonumber \\ {\dot{z}_1^2}= & {} z_2^2 \nonumber \\ {\dot{z}_2^2}= & {} {L_f^2}{h_2}(x)+{L_{g_a}}{L_f}{h_2}(x){v_1}+{L_{g_b}}{L_f}{h_2}(x){v_2} \end{aligned}$$
(43)

Consequently, the dynamics of the diesel engine takes the following matrix description

$$\begin{aligned} \begin{pmatrix} \ddot{z}_1\\ \ddot{z}_2 \end{pmatrix}= \begin{pmatrix} {L_f^2}{h_1}(x) \\ {L_f^2}{h_2}(x) \end{pmatrix}+ \begin{pmatrix} {L_{g_a}}{L_f}{h_1}(x) &{} \quad {L_{g_b}}{L_f}{h_1}(x) \\ {L_{g_a}}{L_f}{h_2}(x) &{} \quad {L_{g_b}}{L_f}{h_2}(x) \end{pmatrix} \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}\nonumber \\ \end{aligned}$$
(44)

that is

$$\begin{aligned} {\ddot{\tilde{z}}}={\tilde{f}}_a+{\tilde{M}}_{a}v \end{aligned}$$
(45)

Moreover, by defining the new control inputs

$$\begin{aligned} {v_{in}^1}= & {} {L_f^2}{h_1}(x)+{L_{g_a}}{L_f}{h_1}(x){v_1}+{L_{g_b}}{L_f}{h_1}(x){v_2} \nonumber \\ {v_{in}^2}= & {} {L_f^2}{h_2}(x)+{L_{g_a}}{L_f}{h_2}(x){v_1}+{L_{g_b}}{L_f}{h_2}(x){v_2} \end{aligned}$$
(46)

one has \(\dot{z}_1^1=z_2^1\), \(\dot{z}_2^1=v_{in}^1\), \(\dot{z}_1^2=z_2^2\) and \(\dot{z}_2^2=v_{in}^2\), that is, the system’s dynamics is written in the canonical Brunovsky form. Moreover, the state-space description of the motor becomes

$$\begin{aligned} \begin{pmatrix} \dot{z}_1^1 \\ \dot{z}_2^1 \\ \dot{z}_1^2 \\ \dot{z}_2^2 \end{pmatrix}= \begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \end{pmatrix} \begin{pmatrix} z_1^1 \\ z_2^1 \\ z_1^2 \\ z_2^2 \end{pmatrix}+ \begin{pmatrix} 0 &{}\quad 0 \\ 1 &{}\quad 0 \\ 0 &{}\quad 0 \\ 0 &{}\quad 1 \end{pmatrix} \begin{pmatrix} v_{in}^1 \\ v_{in}^2 \end{pmatrix} \end{aligned}$$
(47)

The selection of the state feedback control law, which assures zeroing of the tracking error is

$$\begin{aligned} {v_{in}^1}= & {} {\ddot{z}_{1,d}^1}-{K_d^1}(\dot{z}_1^1-\dot{z}_{1,d}^1)-{K_p^1}(z_1^1-z_{1,d}^1) \nonumber \\ {v_{in}^2}= & {} {\ddot{z}_{1,d}^2}-{K_d^2}(\dot{z}_1^2-\dot{z}_{1,d}^2)-{K_p^2}(z_1^2-z_{1,d}^2) \end{aligned}$$
(48)

The auxiliary control inputs are obtained from the relation

$$\begin{aligned} {\tilde{v}}_{in}={\tilde{f}}_{a}+{\tilde{M}}_{a}{\tilde{v}}\,{\Rightarrow }\,{\tilde{v}} ={\tilde{M}}_{a}^{-1}({\tilde{v}}_{in}-{\tilde{f}}_a) \end{aligned}$$
(49)

Moreover, since it holds that

$$\begin{aligned} \tilde{v}= \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}= \begin{pmatrix} \dot{u}_1 \\ u_2 \end{pmatrix} \end{aligned}$$
(50)

the control input \(u_1\) which is actually applied to the diesel motor is obtained by integrating the auxiliary control input \(v_1\) with respect to time.
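
The above control computation can be summarized in the following indicative sketch: the auxiliary inputs of Eq. (48) are computed, the decoupling relation of Eq. (49) is inverted, and the EGR input is recovered through the integration implied by Eq. (50). The gain values and the numerical values of \(\tilde{f}_a\) and \(\tilde{M}_a\) used in the example call are assumptions for demonstration purposes.

```python
import numpy as np

Kp = np.array([4.0, 4.0])      # assumed proportional gains of Eq. (48)
Kd = np.array([4.0, 4.0])      # assumed derivative gains of Eq. (48)

def dynamic_extension_control(z, zdot, z_d, zdot_d, zddot_d, f_a, M_a, u1_prev, dt):
    """Outer loop of Eqs. (48)-(50).
    z, zdot              : measured outputs and their first derivatives
    z_d, zdot_d, zddot_d : desired trajectory and its derivatives
    f_a, M_a             : drift vector and decoupling matrix of Eq. (45)
    u1_prev, dt          : previous EGR input and sampling period (integrator)."""
    v_in = zddot_d - Kd*(zdot - zdot_d) - Kp*(z - z_d)   # Eq. (48)
    v = np.linalg.solve(M_a, v_in - f_a)                 # Eq. (49)
    u1 = u1_prev + v[0]*dt     # u1 is the time integral of v1, Eq. (50)
    u2 = v[1]                  # u2 = v2
    return u1, u2

# usage with dummy (assumed) numerical values of f_a and M_a
u1, u2 = dynamic_extension_control(
    z=np.array([1.5, 0.9]), zdot=np.zeros(2),
    z_d=np.array([1.6, 1.0]), zdot_d=np.zeros(2), zddot_d=np.zeros(2),
    f_a=np.array([0.1, -0.2]), M_a=np.array([[1.0, 0.0], [-0.3, 0.5]]),
    u1_prev=0.05, dt=1e-3)
print(u1, u2)
```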

Nonlinear Control of the Diesel Engine Using Differential Flatness Theory

The results about dynamic state feedback system linearization can be confirmed with the computation of time derivatives and differential flatness theory. The following differentially flat system outputs are considered

$$\begin{aligned} y_1= & {} p_1=x_1\nonumber \\ y_2= & {} {P_c}{{K_c} \over {p_1^{\mu }-1}}\,{\Rightarrow }\,{y_2}={x_3}{{K_c} \over {x_1^\mu -1}} \end{aligned}$$
(51)

The dynamics of the extended system is

$$\begin{aligned} \begin{pmatrix} \dot{x}_1 \\ \dot{x}_2 \\ \dot{x}_3 \\ \dot{x}_4 \end{pmatrix}= & {} \begin{pmatrix} {K_1}{K_c}{{x_3} \over {x_1^\mu -1}}-{K_1}{K_e}{x_1}+{K_1}{x_4} \\ {K_2}{K_e}{x_1}-{K_2}{x_4} \\ {-{x_3} \over \tau }+{K_o}(1-x_2^{-\mu }) \\ 0 \end{pmatrix}\nonumber \\&+ \begin{pmatrix} 0 \\ 0 \\ 0 \\ 1 \end{pmatrix}{v_1}+ \begin{pmatrix} 0 \\ 1 \\ 0 \\ 0 \end{pmatrix}{v_2} \end{aligned}$$
(52)

It holds that \(y_1=x_1\) therefore

$$\begin{aligned} x_1= & {} {q_1}(y,\dot{y}) \nonumber \\ y_2= & {} {x_3}{{K_c} \over {x_1^{\mu }-1}}\,{\Rightarrow }\,{x_3}={{{y_2}(x_1^{\mu }-1)} \over {K_c}}\,{\Rightarrow }\,{x_3}={{{y_2}(y_1^{\mu }-1)} \over {K_c}} \end{aligned}$$
(53)

therefore \(x_3={q_3}(y,\dot{y})\) and thus variable \(x_3\) is a function of the flat output and its derivatives. From the first row of the state-space equations one has

$$\begin{aligned} \dot{x}_1= & {} {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}-{K_1}{K_e}{x_1}+{K_1}{x_4}{\Rightarrow } \nonumber \\ x_4= & {} {{{\dot{x}_1}-{K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}+{K_1}{K_e}{x_1}} \over {K_1}}{\Rightarrow } \nonumber \\ x_4= & {} q_4(y,\dot{y}) \end{aligned}$$
(54)

From the fourth row of the state-space equations one has

$$\begin{aligned} \dot{x}_4=v_1\,{\Rightarrow }\,{v_1}={q_5}(y,\dot{y}) \end{aligned}$$
(55)

where \(v_1\) is a function of the flat output and its derivatives. From the third row of the state-space equations one has

$$\begin{aligned} \dot{x}_3= & {} -{{x_3} \over \tau }+{K_o}(1-x_2^{-\mu }){\Rightarrow }\nonumber \\ x_2^{-\mu }= & {} 1-{{\dot{x}_3+{x_3 \over \tau }} \over {K_o}}\,{\Rightarrow }\,{x_2}=\left( 1-{{\dot{x}_3+{x_3 \over \tau }} \over {K_o}}\right) ^{-{1 \over \mu }} \end{aligned}$$
(56)

From the second row of the state-space equations one obtains

$$\begin{aligned} \dot{x}_2= & {} {K_2}{K_e}{x_1}-{K_2}{x_4}+{v_2}{\Rightarrow } \nonumber \\ v_2= & {} \dot{x}_2-{K_2}{K_e}{x_1}+{K_2}{x_4}{\Rightarrow } \nonumber \\ v_2= & {} {q_6}(y,\dot{y}) \end{aligned}$$
(57)
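
The above relations can be condensed in a short numerical sketch which recovers the state variables from the flat outputs and their first derivatives, following Eqs. (53)–(56). The parameter values and the flat-output values of the example call are assumptions used only for illustration.

```python
import numpy as np

# Illustrative (assumed) parameters
K1, Ke, Kc, Ko, tau, mu = 1.0, 0.5, 0.8, 0.6, 0.2, 0.286

def states_from_flat_outputs(y1, y2, y1_dot, y2_dot):
    """Recover x1..x4 from the flat outputs of Eq. (51), following Eqs. (53)-(56)."""
    x1 = y1                                              # Eq. (53)
    x3 = y2*(y1**mu - 1.0)/Kc                            # Eq. (53)
    # x3_dot obtained by differentiating x3 = y2*(y1^mu - 1)/Kc
    x3_dot = (y2_dot*(y1**mu - 1.0) + y2*mu*y1**(mu - 1.0)*y1_dot)/Kc
    x4 = (y1_dot - K1*Kc*x3/(x1**mu - 1.0) + K1*Ke*x1)/K1    # Eq. (54)
    x2 = (1.0 - (x3_dot + x3/tau)/Ko)**(-1.0/mu)             # Eq. (56)
    return x1, x2, x3, x4

print(np.round(states_from_flat_outputs(1.5, 0.4, 0.0, 0.0), 3))
```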

Therefore, all state variables of the system and the control inputs can be written as functions of the flat output and its derivatives, and thus the system of the diesel engine is differentially flat. This system can be subjected to dynamic feedback linearization. By considering the flat outputs

$$\begin{aligned} y_1= & {} x_1 \nonumber \\ y_2= & {} {x_3}{{K_c} \over {x_1^{\mu }-1}} \end{aligned}$$
(58)

and by differentiating with respect to time one obtains the linearized model of the system

$$\begin{aligned} y_1= & {} x_1 \nonumber \\ \dot{y}_1= & {} \dot{x}_1\,{\Rightarrow }\,\dot{y}_1={K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}\nonumber \\&-{K_1}{K_e}{x_1}+{K_1}{x_4} \end{aligned}$$
(59)

and equivalently

$$\begin{aligned} \ddot{y}_1= & {} \left( {K_1}{K_c}{{{-x_3}{\mu }{x_1^{\mu -1}}} \over {(x_1^{\mu }-1)^2}}-{K_1}{K_e}\right) \dot{x}_1+0\dot{x}_2\nonumber \\&+\left( {K_1}{K_c}{1 \over {x_1^{\mu }-1}}\right) \dot{x}_3+{K_1}\dot{x}_4 \end{aligned}$$
(60)

or

$$\begin{aligned} \ddot{y}_1= & {} \left( {K_1}{K_c}{{{-x_3}{\mu }{x_1^{\mu -1}}} \over {(x_1^{\mu }-1)^2}}-{K_1}{K_e}\right) \left( {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}\right. \nonumber \\&-{K_1}{K_e}{x_1}+{K_1}{x_4}\Bigg )+\left( {K_1}{K_c}{1 \over {x_1^{\mu }-1}}\right) \nonumber \\&\times \left( -{{x_3 \over \tau }}+{K_o}(1-x_2^{-\mu })\right) +{K_1}v_1 \end{aligned}$$
(61)

Equivalently, one has

$$\begin{aligned} y_2= & {} {x_3}{{K_c} \over {x_1^\mu -1}} \nonumber \\ \dot{y}_2= & {} {{{x_3}{K_c}{\mu }{x_1^{{\mu -1}}}} \over {(x_1^{\mu }-1)^2}}\dot{x}_1+{{K_c} \over {x_1^{\mu }-1}}\dot{x}_3{\Rightarrow } \nonumber \\ \dot{y}_2= & {} {{{x_3}{K_c}{\mu }{x_1^{{\mu -1}}}} \over {(x_1^{\mu }-1)^2}}\left( {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}-{K_1}{K_e}{x_1}+{K_1}{x_4}\right) \nonumber \\&+{{K_c} \over {x_1^{\mu }-1}}\left( -{x_3 \over \tau }+{K_o}(1-x_2^{-\mu })\right) \end{aligned}$$
(62)

and equivalently it holds

$$\begin{aligned} \ddot{y}_2={{{\partial }\dot{y}_2} \over {{\partial }{x_1}}}\dot{x}_1+{{{\partial }\dot{y}_2} \over {{\partial }{x_2}}}\dot{x}_2+{{{\partial }\dot{y}_2} \over {{\partial }{x_3}}}\dot{x}_3+{{{\partial }\dot{y}_2} \over {{\partial }{x_4}}}\dot{x}_4{\Rightarrow } \end{aligned}$$
(63)

or

$$\begin{aligned}&\ddot{y}_2\nonumber \\&=\left\{ {{{x_3}{K_c}{\mu }{(\mu -1)}{x_1^{\mu -2}}(x_1^{\mu }-1)^2-{x_3}{K_c}{\mu }{x_1^{\mu -1}}2(x_1^{\mu }-1){\mu }{x_1^{\mu -1}}} \over {(x_1^{\mu }-1)^4}}\right. \nonumber \\&\quad \times \left. \left( {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}-{K_1}{K_e}{x_1}+{K_1}{x_4}\right) \right. \nonumber \\&\quad \left. +\,{{-K_c}{\mu }{x_1^{\mu -1}} \over {(x_1^{\mu }-1)}^2}\left( -{x_3 \over \tau }+{K_o}(1-x_2^{\mu })\right) \right\} \dot{x}_1\nonumber \\&\quad +\,{{K_c} \over {x_1^{\mu }-1}}({K_o}{\mu }{x_2^{\mu -1}})\dot{x}_2\nonumber \\&\quad +\,\left\{ {{{K_c}{\mu }{x_1^{\mu -1}}} \over {(x_1^{\mu }-1)}^2}\left( {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}\right) -{K_1}{K_e}{x_1}\right. \nonumber \\&\quad \left. +\,{K_1}{x_4}+{{K_c} \over {x_1^{\mu }-1}}\left( -{1 \over \tau }\right) \right\} \dot{x}_3\nonumber \\&\quad +\,{{{x_3}{K_c}{\mu }{x_1^{\mu -1}}} \over {x_1^{\mu }-1}^2}{K_1}\dot{x}_4 \end{aligned}$$
(64)

Therefore, it holds

$$\begin{aligned}&\ddot{y}_2\nonumber \\&=\left\{ {{{x_3}{K_c}{\mu }{(\mu -1)}{x_1^{\mu -2}}(x_1^{\mu }-1)^2-{x_3}{K_c}{\mu }{x_1^{\mu -1}}2(x_1^{\mu }-1){\mu }{x_1^{\mu -1}}} \over {(x_1^{\mu }-1)^4}}\right. \nonumber \\&\quad \times \left. \left( {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}-{K_1}{K_e}{x_1}+{K_1}{x_4}\right) \right. \nonumber \\&\quad \left. +\,{{-K_c}{\mu }{x_1^{\mu -1}} \over {(x_1^{\mu }-1)}^2}\left( -{x_3 \over \tau }+{K_o}(1-x_2^{\mu })\right) \right\} \left( {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}\right. \nonumber \\&\quad -\,{K_1}{K_e}{x_1}+{K_1}{x_4}\Bigg )+{{K_c} \over {x_1^{\mu }-1}}({K_o}{\mu }{x_2^{\mu -1}})({K_2}{K_e}{x_1}\nonumber \\&\quad -\,{K_2}{x_4}+v_2)+\left\{ {{{K_c}{\mu }{x_1^{\mu -1}}} \over {(x_1^{\mu }-1)}^2}\left( {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}\right) -{K_1}{K_e}{x_1}\right. \nonumber \\&\quad +\,{K_1}{x_4}+{{K_c} \over {x_1^{\mu }-1}}\left( -{1 \over \tau }\right) \Bigg \}\left( -{x_3 \over \tau }+{K_o}(1-x_2^{-\mu })\right) \nonumber \\&\quad +\,{{{x_3}{K_c}{\mu }{x_1^{\mu -1}}} \over {x_1^{\mu }-1}^2}{K_1}\}{v_1} \end{aligned}$$
(65)

Consequently, in complete analogy to the relations computed with the use of Lie algebra it holds

$$\begin{aligned} \ddot{y}_2={L_f^2}{h_2}(x)+{L_{g_a}}{L_f}{h_2}{v_1}+{L_{g_b}}{L_f}{h_2}{v_2} \end{aligned}$$
(66)

where

$$\begin{aligned}&{L_f^2}{h_2}(x)\nonumber \\&=\left\{ {{{x_3}{K_c}{\mu }{(\mu -1)}{x_1^{\mu -2}}(x_1^{\mu }-1)^2-{x_3}{K_c}{\mu }{x_1^{\mu -1}}2(x_1^{\mu }-1){\mu }{x_1^{\mu -1}}} \over {(x_1^{\mu }-1)^4}}\right. \nonumber \\&\quad \left. \times \left( {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}-{K_1}{K_e}{x_1}+{K_1}{x_4}\right) \right. \nonumber \\&\quad \left. +\,{{-K_c}{\mu }{x_1^{\mu -1}} \over {(x_1^{\mu }-1)}^2}\Bigg (-{x_3 \over \tau }+{K_o}(1-x_2^{\mu })\Bigg )\right\} \Bigg ({K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}\nonumber \\&\quad -\,{K_1}{K_e}{x_1}+{K_1}{x_4}\Bigg )+{{K_c} \over {x_1^{\mu }-1}}({K_o}{\mu }{x_2^{\mu -1}})({K_2}{K_e}{x_1}\nonumber \\&\quad -\,{K_2}{x_4})+ \left\{ {{{K_c}{\mu }{x_1^{\mu -1}}} \over {(x_1^{\mu }-1)}^2}\left( {K_1}{K_c}{{x_3} \over {x_1^{\mu }-1}}\right) -{K_1}{K_e}{x_1}\right. \nonumber \\&\quad \left. +\,{K_1}{x_4}+{{K_c} \over {x_1^{\mu }-1}}\left( -{1 \over \tau }\right) \right\} \left( -{x_3 \over \tau }+{K_o}(1-x_2^{-\mu })\right) \end{aligned}$$
(67)

and also

$$\begin{aligned} {L_{g_a}}{L_f}{h_2}(x)= & {} {{{x_3}{K_c}{\mu }{x_1^{\mu -1}}} \over {(x_1^{\mu }-1)^2}}{K_1} \end{aligned}$$
(68)
$$\begin{aligned} {L_{g_b}}{L_f}{h_2}(x)= & {} {{K_c} \over {x_1^{\mu }-1}}({K_o}{\mu }{x_2^{\mu -1}}) \end{aligned}$$
(69)

Therefore, one is led again to a description of the system in the following form

$$\begin{aligned} \ddot{y}_1= & {} {L_f^2}{h_1}(x)+{L_{g_a}}{L_f}{h_1}(x){v_1}+{L_{g_b}}{L_f}{h_1}(x){v_2} \nonumber \\ \ddot{y}_2= & {} {L_f^2}{h_2}(x)+{L_{g_a}}{L_f}{h_2}(x){v_1}+{L_{g_b}}{L_f}{h_2}(x){v_2} \end{aligned}$$
(70)

Again, by defining the new control inputs

$$\begin{aligned} {v_{in}^1}= & {} {L_f^2}{h_1}(x)+{L_{g_a}}{L_f}{h_1}(x){v_1}+{L_{g_b}}{L_f}{h_1}(x){v_2} \nonumber \\ {v_{in}^2}= & {} {L_f^2}{h_2}(x)+{L_{g_a}}{L_f}{h_2}(x){v_1}+{L_{g_b}}{L_f}{h_2}(x){v_2} \end{aligned}$$
(71)

one has \(\dot{z}_1^1=z_2^1\), \(\dot{z}_2^1=v_{in}^1\), \(\dot{z}_1^2=z_2^2\) and \(\dot{z}_2^2=v_{in}^2\), that is, the system’s dynamics is written in the canonical Brunovsky form. Moreover, the state-space description of the motor becomes

$$\begin{aligned} \begin{pmatrix} \dot{z}_1^1 \\ \dot{z}_2^1 \\ \dot{z}_1^2 \\ \dot{z}_2^2 \end{pmatrix}= \begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \end{pmatrix} \begin{pmatrix} z_1^1 \\ z_2^1 \\ z_1^2 \\ z_2^2 \end{pmatrix}+ \begin{pmatrix} 0 &{}\quad 0 \\ 1 &{}\quad 0 \\ 0 &{}\quad 0 \\ 0 &{}\quad 1 \end{pmatrix} \begin{pmatrix} v_{in}^1 \\ v_{in}^2 \end{pmatrix} \end{aligned}$$
(72)

The design of a feedback controller for the above state-space model proceeds as in the case of linearization with the use of differential geometry (computation of Lie derivatives).

The selection of the state feedback control law, which assures zeroing of the tracking error is

$$\begin{aligned} {v_{in}^1}= & {} {\ddot{z}_{1,d}^1}-{K_d^1}(\dot{z}_1^1-\dot{z}_{1,d}^1)-{K_p^1}(z_1^1-z_{1,d}^1) \nonumber \\ {v_{in}^2}= & {} {\ddot{z}_{1,d}^2}-{K_d^2}(\dot{z}_1^2-\dot{z}_{1,d}^2)-{K_p^2}(z_1^2-z_{1,d}^2) \end{aligned}$$
(73)

The auxiliary control inputs are obtained from the relation

$$\begin{aligned} \tilde{v}_{in}={\tilde{f}_a}+{\tilde{M}_a}\tilde{v}\,{\Rightarrow }\,\tilde{v} ={\tilde{M}_a^{-1}}(\tilde{v}_{in}-\tilde{f}_a) \end{aligned}$$
(74)

Moreover, since it holds that

$$\begin{aligned} \tilde{v}= \begin{pmatrix} v_1 \\ v_2 \end{pmatrix}= \begin{pmatrix} \dot{u}_1 \\ u_2 \end{pmatrix} \end{aligned}$$
(75)

the control input \(u_1\) which is actually applied to the diesel motor is obtained by integrating the auxiliary control input \(v_1\) with respect to time.

Flatness-Based Adaptive Fuzzy Control for MIMO Nonlinear Systems

Transformation of MIMO Nonlinear Systems into the Brunovsky Form

It is assumed now that, after defining the flat outputs of the initial MIMO nonlinear system and after expressing the system's state variables and control inputs as functions of the flat output and of the associated derivatives, the system can be transformed into the Brunovsky canonical form [32]:

$$\begin{aligned} \begin{aligned}&\dot{x}_1=x_2 \\&\dot{x}_2=x_3 \\&\cdots \\&\dot{x}_{r_1-1}=x_{r_1} \\&\dot{x}_{r_1}=f_1(x)+{\sum _{j=1}^p}{g_{1_j}(x)}u_j+d_1\\&\dot{x}_{r_1+1}=x_{r_1+2} \\&\dot{x}_{r_1+2}=x_{r_1+3} \\&\cdots \\&\dot{x}_{n-1}=x_{n} \\&\dot{x}_{n}=f_p(x)+{\sum _{j=1}^p}{g_{p_j}(x)}u_j+d_p\\&y_1=x_1 \\&y_2=x_{r_1+1} \\&\cdots \\&y_p=x_{n-r_p+1} \end{aligned} \end{aligned}$$
(76)

where \(x=[x_1,\ldots ,x_n]^T\) is the state vector of the transformed system (according to the differential flatness formulation), \(u=[u_1,\ldots ,u_p]^T\) is the set of control inputs, \(y=[y_1,\ldots ,y_p]^T\) is the output vector, \(f_i\) are the drift functions and \(g_{i,j}, \ i,j=1,2,\ldots ,p\) are smooth functions corresponding to the control input gains, while \(d_i\) is a variable associated with external disturbances. It holds that \(r_1+r_2+\cdots +r_p=n\). Having written the initial nonlinear system in the canonical (Brunovsky) form it holds

$$\begin{aligned} y_i^{(r_i)}=f_i(x)+{\sum _{j=1}^p}g_{ij}(x)u_j+d_i \end{aligned}$$
(77)

Equivalently, in vector form, one has the following description for the system dynamics

$$\begin{aligned} y^{(r)}=f(x)+g(x)u+d \end{aligned}$$
(78)

where the following vectors and matrices are defined

$$\begin{aligned} y^{(r)}= & {} [y_1^{(r_1)},\ldots ,y_p^{(r_p)}]\nonumber \\ f(x)= & {} [f_1(x),\ldots ,f_p(x)]^T\nonumber \\ g(x)= & {} [g_1(x),\ldots ,g_p(x)]\nonumber \\ \textit{with} \ g_i(x)= & {} [g_{1i}(x),\ldots ,g_{pi}(x)]^T\nonumber \\ A= & {} diag[A_1,\ldots ,A_p], \ \ B=diag[B_1,\ldots ,B_p] \nonumber \\ C^T= & {} diag[C_1,\ldots ,C_p], \ \ d=[d_1,\ldots ,d_p]^T \end{aligned}$$
(79)

where matrix A has the MIMO canonical form, i.e. with elements

$$\begin{aligned} A_i= & {} \begin{pmatrix} 0 &{}\quad 1 &{}\quad \cdots &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad \cdots &{}\quad 0 \\ \vdots &{}\quad \vdots &{}\quad \cdots &{}\quad \vdots \\ 0 &{}\quad 0 &{}\quad \cdots &{}\quad 1 \\ 0 &{}\quad 0 &{}\quad \cdots &{}\quad 0 \end{pmatrix}_{{r_i}\,\times \,{r_i}} \nonumber \\ B_i^T= & {} \begin{pmatrix} 0&\quad 0&\quad \cdots&\quad 0&\quad 1 \end{pmatrix}_{1\,\times \,{r_i}}\nonumber \\ C_i= & {} \begin{pmatrix} 1&\quad 0&\quad \cdots&\quad 0&\quad 0 \end{pmatrix}_{1\,{\times }\,{r_i}} \end{aligned}$$
(80)

Thus, Eq. (77) can be written in state-space form

$$\begin{aligned} \dot{x}= & {} Ax+B[f(x)+g(x)u+\tilde{d}] \nonumber \\ y= & {} {C^T}x \end{aligned}$$
(81)

which can be also written in the equivalent form:

$$\begin{aligned} \dot{x}= & {} Ax+Bv+B\tilde{d}\nonumber \\ y= & {} {C^T}x \end{aligned}$$
(82)

where \(v=f(x)+g(x)u\). The reference setpoints for the system’s outputs \(y_1,\ldots ,y_p\) are denoted as \(y_{1m},\ldots ,y_{pm}\), thus for the associated tracking errors it holds

$$\begin{aligned} e_1= & {} y_1-y_{1m} \nonumber \\ e_2= & {} y_2-y_{2m} \nonumber \\&\quad \cdots \nonumber \\ e_p= & {} y_p-y_{pm} \end{aligned}$$
(83)

The error vector of the outputs of the transformed MIMO system is denoted as

$$\begin{aligned} E_1= & {} [e_1,\ldots ,e_p]^T \nonumber \\ y_m= & {} [y_{1m},\ldots ,y_{pm}]^T\nonumber \\&\quad \cdots \nonumber \\ y_m^{(r)}= & {} [y_{1m}^{(r)},\ldots ,y_{pm}^{(r)}]^T \end{aligned}$$
(84)

where \(y_{im}^{(r)}\) denotes the r-th order derivative of the i-th reference output of the MIMO dynamical system. Thus, one can also define the following vectors: (i) a vector containing the state variables of the system and the associated derivatives, (ii) a vector containing the reference outputs of the system and the associated derivatives

$$\begin{aligned} x= & {} \left[ x_1,\ldots ,x_1^{r_1-1},\ldots ,x_p,\cdots ,x_p^{r_p-1}\right] ^T \end{aligned}$$
(85)
$$\begin{aligned} Y_m= & {} \left[ y_{1m},\ldots ,y_{1m}^{r_1-1},\ldots ,y_{pm},\ldots ,y_{pm}^{r_p-1}\right] ^T \end{aligned}$$
(86)

while in a similar manner one can define a vector containing the tracking error of the system’s outputs and the associated derivatives

$$\begin{aligned} e=Y_m-x=[e_{1},\ldots ,e_{1}^{r_1-1},\ldots ,e_{p},\ldots ,e_{p}^{r_p-1}]^T \end{aligned}$$
(87)

It is assumed that matrix g(x) is a nonsingular one, i.e. \(g^{-1}(x)\) exists and is bounded for all \(x{\in }U_x\), where \({U_x}{\subset }R^n\) is a compact set. In any case, the problem of singularities in matrix g(x) can be handled by appropriately modifying the state feedback-based control input.

The objectives of the adaptive fuzzy controller, denoted as \(u=u(x,e|\theta )\), are: (i) all the signals involved in the controller’s design are bounded and it holds that \(lim_{t\,{\rightarrow }\,\infty }e=0\), and (ii) the \(H_{\infty }\) tracking performance criterion is achieved for a prescribed attenuation level.

In the presence of non-Gaussian disturbances \(w_d\), successful tracking of the reference signal is denoted by the \(H_{\infty }\) criterion [26, 31]:

$$\begin{aligned} {\int _0^T}{e^T}Qe{dt} \le {\rho ^2} {\int _0^T}{{w_d}^T}{w_d}{dt} \end{aligned}$$
(88)

where \(\rho \) is the attenuation level and corresponds to the maximum singular value of the transfer function G(s) of the linearized model associated with Eqs. (81) and (82).
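
As an indicative numerical check, the two sides of the inequality of Eq. (88) can be evaluated on sampled tracking-error and disturbance signals. The signals used below are synthetic and serve only as an assumed example.

```python
import numpy as np

def hinf_criterion_satisfied(e, w_d, Q, rho, dt):
    """Evaluate the H-infinity tracking inequality of Eq. (88) on sampled signals.
    e   : (T, n) tracking-error samples
    w_d : (T, m) disturbance samples
    Q   : (n, n) positive semi-definite weight matrix
    rho : prescribed attenuation level."""
    lhs = np.sum(np.einsum('ti,ij,tj->t', e, Q, e))*dt
    rhs = rho**2*np.sum(np.einsum('ti,ti->t', w_d, w_d))*dt
    return lhs <= rhs, lhs, rhs

# synthetic (assumed) signals: decaying error, bounded disturbance
t = np.arange(0.0, 5.0, 1e-3)
e = np.stack([0.2*np.exp(-2.0*t), 0.1*np.exp(-2.0*t)], axis=1)
w_d = 0.2*np.sin(3.0*t)[:, None]
ok, lhs, rhs = hinf_criterion_satisfied(e, w_d, np.eye(2), rho=0.8, dt=1e-3)
print(ok, lhs, rhs)
```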

Control Law

The control signal of the MIMO nonlinear system which has been transformed into the Brunovsky form as described by Eq. (82) contains the unknown nonlinear functions f(x) and g(x). In case that the complete state vector x is measurable these unknown functions can be approximated by

$$\begin{aligned} \hat{f}(x|\theta _f)= & {} \Phi _f(x){\theta _f} \nonumber \\ \hat{g}(x|\theta _g)= & {} \Phi _g(x){\theta _g} \end{aligned}$$
(89)

where

$$\begin{aligned} \Phi _f(x)=\left( {\xi _f^1}(x), {\xi _f^2}(x), \ldots {\xi _f^n}(x)\right) ^T \end{aligned}$$
(90)

with \(\xi _f^i(x), \ i=1,\ldots ,n\) being the vector of kernel functions (e.g. normalized fuzzy Gaussian membership functions), where

$$\begin{aligned} \xi _f^i(x)=\left( \phi _f^{i,1}(x), \phi _f^{i,2}(x), \ldots , \phi _f^{i,N}(x)\right) \end{aligned}$$
(91)

thus giving

$$\begin{aligned} \Phi _f(x)=\begin{pmatrix} \phi _f^{1,1}(x) &{}\quad \phi _f^{1,2}(x) &{}\quad \cdots &{}\quad \phi _f^{1,N}(x) \\ \phi _f^{2,1}(x) &{}\quad \phi _f^{2,2}(x) &{}\quad \cdots &{}\quad \phi _f^{2,N}(x) \\ \cdots &{}\quad \cdots &{}\quad \cdots &{}\quad \cdots \\ \phi _f^{n,1}(x) &{}\quad \phi _f^{n,2}(x) &{}\quad \cdots &{}\quad \phi _f^{n,N}(x) \end{pmatrix} \end{aligned}$$
(92)

while the weights vector is defined as

$$\begin{aligned} {\theta _f}^T=\begin{pmatrix} \theta _f^{1}, \theta _f^{2}, \ldots , \theta _f^{N} \end{pmatrix} \end{aligned}$$
(93)

where \(N\) is the number of basis functions that is used to approximate each of the components \(f_i, \ i=1,\ldots ,n\), of function f. Thus, one obtains the relation of Eq. (89), i.e. \(\hat{f}(x|\theta _f)={\Phi _f(x)}\theta _f\).
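
An indicative implementation of the basis-function expansion of Eqs. (90)–(93) is sketched below, using normalized Gaussian membership functions of the form of Eq. (129). The grid of centers, the common width \(\sigma \) and the weight values are assumptions introduced only for illustration.

```python
import numpy as np

def normalized_gaussian_kernels(x, centers, sigma):
    """Normalized Gaussian kernels, one per fuzzy rule (form of Eq. (91)/(129)).
    x       : (n,) state (or state-estimate) vector
    centers : (N, n) rule centers
    sigma   : common width of the Gaussian membership functions."""
    memberships = np.exp(-((x - centers)**2)/(2.0*sigma**2))   # mu_{A_j}^i(x_j)
    rule_firing = np.prod(memberships, axis=1)                 # product over j
    return rule_firing/np.sum(rule_firing)                     # normalization over i

def f_hat(x, centers, sigma, theta_f):
    """Approximation f_hat(x | theta_f) = Phi_f(x) theta_f of Eq. (89)."""
    phi = normalized_gaussian_kernels(x, centers, sigma)       # (N,)
    return phi @ theta_f                                       # theta_f: (N, p)

# illustrative (assumed) setup: 3 centers per axis on a 4-dimensional grid -> N = 81
grid = np.linspace(-1.0, 1.0, 3)
centers = np.array(np.meshgrid(grid, grid, grid, grid)).reshape(4, -1).T   # (81, 4)
theta_f = 0.01*np.random.default_rng(0).standard_normal((centers.shape[0], 2))
print(f_hat(np.array([0.1, -0.2, 0.3, 0.0]), centers, sigma=0.5, theta_f=theta_f))
```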

In a similar manner, for the approximation of function g one has

$$\begin{aligned} \Phi _{g}(x)=\begin{pmatrix} {\xi _{g}^1}(x), {\xi _{g}^2}(x), \ldots , {\xi _{g}^N}(x) \end{pmatrix}^T \end{aligned}$$
(94)

with \(\xi _{g}^i(x), \ i=1,\ldots ,N\) being the vector of kernel functions (e.g. normalized fuzzy Gaussian membership functions), where

$$\begin{aligned} \xi _{g}^i(x)=\begin{pmatrix} \phi _{g}^{i,1}(x), \phi _{g}^{i,2}(x), \ldots , \phi _{g}^{i,N}(x) \end{pmatrix} \end{aligned}$$
(95)

thus giving

$$\begin{aligned} \Phi _g(x)=\begin{pmatrix} \phi _g^{1,1}(x) &{}\quad \phi _g^{1,2}(x) &{}\quad \cdots &{}\quad \phi _g^{1,N}(x) \\ \phi _g^{2,1}(x) &{}\quad \phi _g^{2,2}(x) &{}\quad \cdots &{}\quad \phi _g^{2,N}(x) \\ \cdots &{}\quad \cdots &{}\quad \cdots &{}\quad \cdots \\ \phi _g^{n,1}(x) &{}\quad \phi _g^{n,2}(x) &{}\quad \cdots &{}\quad \phi _g^{n,N}(x) \end{pmatrix} \end{aligned}$$
(96)

while the weights vector is defined as

$$\begin{aligned} \theta _{g}=\begin{pmatrix} {\theta _{g}^1}, {\theta _{g}^2}, \ldots , {\theta _{g}^p} \end{pmatrix} \end{aligned}$$
(97)

where the components of matrix \(\theta _{g}\) are defined as

$$\begin{aligned} {\theta _{g}^j}=\begin{pmatrix} \theta _{g_1}^{j}, \theta _{g_2}^{j}, \ldots , \theta _{g_N}^{j} \end{pmatrix}^T \end{aligned}$$
(98)

where \(N\) is the number of basis functions that is used to approximate the components of function g, while \(j=1,\ldots ,p\) indexes the control inputs. Thus, one obtains for matrix \(\theta _g{\in }R^{N\,{\times }\,p}\)

$$\begin{aligned} \theta _g=\begin{pmatrix} \theta _{g_1}^1 &{}\quad \theta _{g_1}^2 &{}\quad \cdots &{}\quad \theta _{g_1}^p \\ \theta _{g_2}^1 &{}\quad \theta _{g_2}^2 &{}\quad \cdots &{}\quad \theta _{g_2}^p \\ \cdots &{}\quad \cdots &{}\quad \cdots &{}\quad \cdots \\ \theta _{g_N}^1 &{}\quad \theta _{g_N}^2 &{}\quad \cdots &{}\quad \theta _{g_N}^p \end{pmatrix} \end{aligned}$$
(99)

It holds that

$$\begin{aligned} g=\begin{pmatrix} g_1 \\ g_2 \\ \cdots \\ g_n \end{pmatrix}= \begin{pmatrix} g_1^{1} &{}\quad g_1^{2} &{}\quad \cdots &{}\quad g_1^{p} \\ g_2^{1} &{}\quad g_2^{2} &{}\quad \cdots &{}\quad g_2^{p} \\ \cdots &{}\quad \cdots &{}\quad \cdots &{}\quad \cdots \\ g_n^{1} &{}\quad g_n^{2} &{}\quad \cdots &{}\quad g_n^{p} \\ \end{pmatrix} \end{aligned}$$
(100)

Using the above, one finally has the relation of Eq. (89), i.e. \(\hat{g}(x|\theta _g)={\Phi _g(x)}\theta _g\). If the state variables of the system are available for measurement then a state-feedback control law can be formulated as

$$\begin{aligned} u=\hat{g}^{-1}(x|\theta _g)[-\hat{f}(x|\theta _f)+y_m^{(r)}-{K^T}e+u_c] \end{aligned}$$
(101)

where \(\hat{f}(x|\theta _f)\) and \(\hat{g}(x|\theta _g)\) are neurofuzzy models to approximate f(x) and g(x), respectively. \(u_c\) is a supervisory control term, e.g. \(H_{\infty }\) control term that is used to compensate for the effects of modelling inaccuracies and external disturbances. Moreover, \(K^T\) is the feedback gain matrix that assures that the characteristic polynomial of matrix \(A-B{K^T}\) will be a Hurwitz one.
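
A minimal sketch of the certainty-equivalence control law of Eq. (101) is given next, assuming that the approximations \(\hat{f}\) and \(\hat{g}\) have already been evaluated (for instance by the neuro-fuzzy approximators described above); all numerical values in the example call are assumptions.

```python
import numpy as np

def adaptive_fuzzy_control(f_hat, g_hat, y_m_r, e, K, u_c):
    """State-feedback control law of Eq. (101).
    f_hat : (p,)   current estimate of f(x)
    g_hat : (p, p) current estimate of g(x) (assumed invertible)
    y_m_r : (p,)   r-th derivatives of the reference outputs
    e     : (n,)   tracking-error vector of Eq. (87)
    K     : (n, p) feedback gain matrix (A - B K^T Hurwitz)
    u_c   : (p,)   supervisory (H-infinity) control term."""
    return np.linalg.solve(g_hat, -f_hat + y_m_r - K.T @ e + u_c)

# dummy (assumed) numerical values, p = 2 inputs, n = 4 error states
u = adaptive_fuzzy_control(
    f_hat=np.array([0.2, -0.1]),
    g_hat=np.array([[1.0, 0.1], [0.0, 0.8]]),
    y_m_r=np.zeros(2),
    e=np.array([0.05, 0.0, -0.02, 0.0]),
    K=np.ones((4, 2))*2.0,
    u_c=np.zeros(2))
print(u)
```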

Estimation of the State Vector

The control of the system described by Eq. (78) becomes more complicated when the state vector x is not directly measurable and has to be reconstructed through a state observer. The following definitions are used

  • error of the state vector \(e=x-x_m\)

  • error of the estimated state vector \(\hat{e}=\hat{x}-x_m\)

  • observation error \(\tilde{e}=e-\hat{e}=(x-x_m)-(\hat{x}-x_m)\)

When an observer is used to reconstruct the state vector, the control law of Eq. (101) is written as

$$\begin{aligned} u={\hat{g}^{-1}(\hat{x}|\theta _g)[-\hat{f}(\hat{x}|\theta _f)+y_m^{(r)}-{K^T}\hat{e}+{u_c}]} \end{aligned}$$
(102)

Applying Eq. (102) to the nonlinear system described by Eq. (78), results into

$$\begin{aligned} y^{(r)}= & {} f(x)+g(x){\hat{g}^{-1}(\hat{x})}[-\hat{f}(\hat{x})+y_m^{(r)}\nonumber \\&-{K^T} \hat{e}+u_c]+d{\Rightarrow } \nonumber \\ y^{(r)}= & {} f(x)+[g(x)-\hat{g}(\hat{x})+\hat{g}(\hat{x})]{\hat{g}^{-1}(\hat{x})} [-\hat{f}(\hat{x})\nonumber \\&+y_m^{(r)}-{K^T}\hat{e}+u_c]+d{\Rightarrow }\nonumber \\ y^{(r)}= & {} [f(x)-\hat{f}(\hat{x})]+[g(x)-\hat{g}(\hat{x})]u\nonumber \\&+y_m^{(r)}-{K^T}\hat{e}+u_c+d \end{aligned}$$
(103)

It holds \(e=x-x_m \Rightarrow y^{(r)}=e^{(r)}+y_m^{(r)}\). Substituting \(y^{(r)}\) in the above equation gives

$$\begin{aligned} e^{(r)}+y_m^{(r)}= & {} y_m^{(r)}-{K^T\hat{e}}+u_c+[f(x)-\hat{f}(\hat{x})]\nonumber \\&+\,[g(x)-\hat{g}(\hat{x})]u+d \end{aligned}$$
(104)

and equivalently

$$\begin{aligned} \dot{e}= & {} Ae-B{K^T\hat{e}}+B{u_c}+B\{[f(x)-\hat{f}(\hat{x})]\nonumber \\&+\,[g(x)-\hat{g}(\hat{x})]u+\tilde{d}\} \end{aligned}$$
(105)
$$\begin{aligned} e_1= & {} {C^T}e \end{aligned}$$
(106)

where \(e=[e^1,e^2,\ldots ,e^p]^T\) with \(e^i=[e_i,\dot{e}_i,\ddot{e}_i,\ldots ,e_i^{r_i-1}]^T\), \(i=1,2,\ldots ,p\) and equivalently \(\hat{e}=[\hat{e}^1,\hat{e}^2,\ldots ,\hat{e}^p]^T\) with \(\hat{e}^i=[\hat{e}_i,\hat{\dot{e}}_i,\hat{\ddot{e}}_i,\ldots ,\hat{e}_i^{r_i-1}]^T\), \(i=1,2,\ldots ,p\). Matrices A,B and C have been defined in Eq. (80).

A state observer is designed according to Eqs. (105) and (106) and is given by [28]:

$$\begin{aligned} \dot{\hat{e}}= & {} A{\hat{e}}-B{K^T}{\hat{e}}+{K_o}\left[ e_1-{C^T}{\hat{e}}\right] \end{aligned}$$
(107)
$$\begin{aligned} \hat{e}_1= & {} {C^T}{\hat{e}} \end{aligned}$$
(108)

The feedback gain matrix is denoted as \(K{\in }R^{n\,{\times }\,p}\). The observation gain matrix is denoted as \(K_o{\in }R^{n\,{\times }\,p}\) and its elements are selected so as to assure the asymptotic elimination of the observation error.
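
A short numerical sketch of one (Euler-discretized) update of the observer of Eqs. (107)–(108) is given below, using the canonical matrices of the diesel-engine case; the gain values are assumptions chosen only for illustration.

```python
import numpy as np

def observer_step(e_hat, e1, A, B, K, Ko, C, dt):
    """One Euler step of the state observer of Eqs. (107)-(108).
    e_hat : (n,)   current estimate of the tracking-error vector
    e1    : (p,)   measured output error C^T e
    A, B  : canonical-form matrices of Eq. (80)/(119)
    K     : (n, p) feedback gain matrix
    Ko    : (n, p) observer gain matrix
    C     : (n, p) output matrix (so that e1 = C^T e)."""
    e_hat_dot = A @ e_hat - B @ (K.T @ e_hat) + Ko @ (e1 - C.T @ e_hat)
    return e_hat + dt*e_hat_dot

# diesel-engine case of Eq. (119): n = 4, p = 2 (gain values are assumed)
A = np.array([[0., 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]])
B = np.array([[0., 0], [1, 0], [0, 0], [0, 1]])
C = np.array([[1., 0], [0, 0], [0, 1], [0, 0]])
K = np.array([[4., 0], [4, 0], [0, 4], [0, 4]])
Ko = np.array([[10., 0], [25, 0], [0, 10], [0, 25]])
e_hat = np.zeros(4)
print(observer_step(e_hat, e1=np.array([0.05, -0.02]),
                    A=A, B=B, K=K, Ko=Ko, C=C, dt=1e-3))
```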

Application of Flatness-Based Adaptive Fuzzy Control to the MIMO Diesel Engine Model

Differential Flatness of the Diesel Engine

It holds that

$$\begin{aligned} \ddot{x}_1= & {} f_1(x)+g_1(x)u \nonumber \\ \ddot{x}_3= & {} f_2(x)+g_2(x)u \end{aligned}$$
(109)

Equivalently, using the state variables \(x_2=\dot{x}_1\) and \(x_4=\dot{x}_3\), it holds that

$$\begin{aligned} \dot{x}_1= & {} x_2\nonumber \\ \dot{x}_2= & {} f_1(x)+g_1(x)u\nonumber \\ \dot{x}_3= & {} x_4\nonumber \\ \dot{x}_4= & {} f_2(x)+g_2(x)u \end{aligned}$$
(110)

Moreover, from Eq. (110) it holds

$$\begin{aligned} \begin{pmatrix} \ddot{x}_1\\ \ddot{x}_3 \end{pmatrix}= & {} \begin{pmatrix} f_1(x) \\ f_2(x) \end{pmatrix}+ \begin{pmatrix} g_1(x) \\ g_2(x) \end{pmatrix}u \ \textit{i.e.}\nonumber \\ u= & {} \begin{pmatrix} g_1(x) \\ g_2(x) \end{pmatrix}^{-1} \left\{ \begin{pmatrix} \ddot{x}_1 \\ \ddot{x}_3 \end{pmatrix}- \begin{pmatrix} f_1(x) \\ f_2(x) \end{pmatrix} \right\} \end{aligned}$$
(111)

Therefore, the considered diesel engine system is a differentially flat one. Next, taking into account also the effects of additive disturbances, the dynamic model becomes

$$\begin{aligned} \ddot{x}_1= & {} f_1(x,t)+g_1(x,t)u+d_1\nonumber \\ \ddot{x}_3= & {} f_2(x,t)+g_2(x,t)u+d_2 \end{aligned}$$
(112)
$$\begin{aligned} \begin{pmatrix} \ddot{x}_1 \\ \ddot{x}_3 \end{pmatrix}= & {} \begin{pmatrix} f_1(x,t) \\ f_2(x,t) \end{pmatrix}+ \begin{pmatrix} g_1(x,t) \\ g_2(x,t) \end{pmatrix}u+ \begin{pmatrix} d_1 \\ d_2 \end{pmatrix} \end{aligned}$$
(113)

The following control input is defined

$$\begin{aligned} u= & {} \begin{pmatrix} \hat{g}_1(x,t) \\ \hat{g}_2(x,t) \end{pmatrix}^{-1} \left\{ \begin{pmatrix} \ddot{x}_1^d \\ \ddot{x}_3^d \end{pmatrix} - \begin{pmatrix} \hat{f}_1(x,t) \\ \hat{f}_2(x,t) \end{pmatrix}\right. \nonumber \\&\left. - \begin{pmatrix} {K_1^T} \\ {K_2^T} \end{pmatrix}e+ \begin{pmatrix} u_{c_1} \\ u_{c_2} \end{pmatrix} \right\} \end{aligned}$$
(114)

where \([u_{c_1} \ u_{c_2}]^T\) is a robust control term that is used for the compensation of the model’s uncertainties as well as of the external disturbances and \(K_i^T=[k_1^i,k_2^i,\ldots ,k_{n-1}^i,k_n^i]\). Substituting Eq. (114) into Eq. (113) the closed-loop tracking error dynamics is obtained

$$\begin{aligned} \begin{pmatrix} \ddot{x}_1 \\ \ddot{x}_3 \end{pmatrix}= & {} \begin{pmatrix} f_1(x,t) \\ f_2(x,t) \end{pmatrix}+ \begin{pmatrix} g_1(x,t) \\ g_2(x,t) \end{pmatrix} \begin{pmatrix} \hat{g}_1(x,t) \\ \hat{g}_2(x,t) \end{pmatrix}^{-1}\nonumber \\&\times \left\{ \begin{pmatrix} \ddot{x}_1^d\\ \ddot{x}_3^d \end{pmatrix}- \begin{pmatrix} \hat{f}_1(x,t) \\ \hat{f}_2(x,t) \end{pmatrix}- \begin{pmatrix} {K_1^T} \\ {K_2^T} \end{pmatrix}e\right. \nonumber \\&\left. + \begin{pmatrix} u_{c_1} \\ u_{c_2} \end{pmatrix} \right\} + \begin{pmatrix} d_1 \\ d_2 \end{pmatrix} \end{aligned}$$
(115)

Eq. (115) can now be written as

$$\begin{aligned} \begin{pmatrix} \ddot{x}_1 \\ \ddot{x}_3 \end{pmatrix}= & {} \begin{pmatrix} f_1(x,t) \\ f_2(x,t) \end{pmatrix}+\left\{ \begin{pmatrix} g_1(x,t)-\hat{g}_1(x,t) \\ g_2(x,t)-\hat{g}_2(x,t) \end{pmatrix}\right. \nonumber \\&\left. +\begin{pmatrix} \hat{g}_1(x,t) \\ \hat{g}_2(x,t) \end{pmatrix}\right\} \begin{pmatrix} \hat{g}_1(x,t) \\ \hat{g}_2(x,t) \end{pmatrix}^{-1}{\cdot }\left\{ \begin{pmatrix} \ddot{x}_1^d\\ \ddot{x}_3^d \end{pmatrix}\right. \nonumber \\&\left. - \begin{pmatrix} \hat{f}_1(x,t) \\ \hat{f}_2(x,t) \end{pmatrix}- \begin{pmatrix} {K_1^T} \\ {K_2^T} \end{pmatrix}e+ \begin{pmatrix} u_{c_1} \\ u_{c_2} \end{pmatrix} \right\} + \begin{pmatrix} d_1 \\ d_2 \end{pmatrix}\nonumber \\ \end{aligned}$$
(116)

and using Eq. (114) this results in

$$\begin{aligned} \begin{pmatrix} \ddot{e}_1 \\ \ddot{e}_3 \end{pmatrix}= & {} \begin{pmatrix} f_1(x,t)-\hat{f}_1(x,t) \\ f_2(x,t)-\hat{f}_2(x,t) \end{pmatrix}+\begin{pmatrix} g_1(x,t)-\hat{g}_1(x,t) \\ g_2(x,t)-\hat{g}_2(x,t) \end{pmatrix}u\nonumber \\&-\begin{pmatrix} K_1^T \\ K_2^T \end{pmatrix} e+ \begin{pmatrix} u_{c_1} \\ u_{c_2} \end{pmatrix}+ \begin{pmatrix} d_1 \\ d_2 \end{pmatrix} \end{aligned}$$
(117)

The following description for the approximation error is defined

$$\begin{aligned} w=\begin{pmatrix} f_1(x,t)-\hat{f}_1(x,t) \\ f_2(x,t)-\hat{f}_2(x,t) \end{pmatrix}+\begin{pmatrix} g_1(x,t)-\hat{g}_1(x,t) \\ g_2(x,t)-\hat{g}_2(x,t) \end{pmatrix}{u}\nonumber \\ \end{aligned}$$
(118)

Moreover, the following matrices are defined

$$\begin{aligned} A= & {} \begin{pmatrix} 0 &{}\quad 1 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 1 \\ 0 &{}\quad 0 &{}\quad 0 &{}\quad 0 \end{pmatrix}, \ \ B= \begin{pmatrix} 0 &{}\quad 0 \\ 1 &{}\quad 0 \\ 0 &{}\quad 0 \\ 0 &{}\quad 1 \end{pmatrix} \nonumber \\ K^T= & {} \begin{pmatrix} K_1^1 &{}\quad K_2^1 &{}\quad K_3^1 &{}\quad K_4^1 \\ K_1^2 &{}\quad K_2^2 &{}\quad K_3^2 &{}\quad K_4^2 \end{pmatrix} \end{aligned}$$
(119)

Using matrices A, B, \(K^T\), Eq. (117) is written in the following form

$$\begin{aligned} \dot{e}= & {} \big (A-B{K^T}\big )e+B{u_c}+B\left\{ \begin{pmatrix} f_1(x,t)-\hat{f}_1(x,t) \\ f_2(x,t)-\hat{f}_2(x,t) \end{pmatrix}\right. +\nonumber \\&\left. +\begin{pmatrix} g_1(x,t)-\hat{g}_1(x,t) \\ g_2(x,t)-\hat{g}_2(x,t) \end{pmatrix}{u}+ \tilde{d}\right\} \end{aligned}$$
(120)
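
Regarding the selection of \(K^T\) so that \(A-B{K^T}\) is a Hurwitz matrix, an indicative computation through pole placement on the matrices of Eq. (119) is sketched below; the desired pole locations are assumptions used only for demonstration.

```python
import numpy as np
from scipy.signal import place_poles

# canonical matrices of Eq. (119)
A = np.array([[0., 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]])
B = np.array([[0., 0], [1, 0], [0, 0], [0, 1]])

# assumed desired closed-loop poles (stable, distinct)
poles = [-2.0, -2.5, -3.0, -3.5]
K_T = place_poles(A, B, poles).gain_matrix      # plays the role of K^T in Eq. (119)

eigs = np.linalg.eigvals(A - B @ K_T)
print("closed-loop eigenvalues:", np.round(eigs, 3))   # all in the left half-plane
```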

When the estimated state vector \(\hat{x}\) is used in the feedback control loop, equivalently to Eq. (105) one has

$$\begin{aligned} \dot{e}= & {} Ae-B{K^T}\hat{e}+B{u_c}+B\left\{ \begin{pmatrix} f_1(x,t)-\hat{f}_1(\hat{x},t) \\ f_2(x,t)-\hat{f}_2(\hat{x},t) \end{pmatrix}\right. +\nonumber \\&\left. +\begin{pmatrix} g_1(x,t)-\hat{g}_1(\hat{x},t) \\ g_2(x,t)-\hat{g}_2(\hat{x},t) \end{pmatrix}{u}+ \tilde{d}\right\} \end{aligned}$$
(121)

and considering that the approximation error w is now denoted as

$$\begin{aligned} w=\begin{pmatrix} f_1(x,t)-\hat{f}_1(\hat{x},t) \\ f_2(x,t)-\hat{f}_2(\hat{x},t) \end{pmatrix}+\begin{pmatrix} g_1(x,t)-\hat{g}_1(\hat{x},t) \\ g_2(x,t)-\hat{g}_2(\hat{x},t) \end{pmatrix}{u}\nonumber \\ \end{aligned}$$
(122)

Eq. (121) can be also written as

$$\begin{aligned} \dot{e}=Ae-B{K^T}\hat{e}+B{u_c}+Bw+B\tilde{d} \end{aligned}$$
(123)

The associated state observer will be described again by Eq. (107) and Eq. (108).

Dynamics of the Observation Error

The observation error is defined as \(\tilde{e}=e-\hat{e}=x-\hat{x}\). Subtracting Eq. (107) from Eq. (105), as well as Eq. (108) from Eq. (106), one gets

$$\begin{aligned} \dot{e}-\dot{\hat{e}}= & {} A(e-\hat{e})+B{u_c}+B\{[f(x,t)-\hat{f}(\hat{x},t)]\\&+\,[g(x,t)-\hat{g}(\hat{x},t)]u+ \tilde{d}\}-{K_o}{C^T}(e-\hat{e}) \\ {e_1}-{\hat{e}_1}= & {} {C^T}(e-\hat{e}) \end{aligned}$$

or equivalently

$$\begin{aligned} \dot{\tilde{e}}= & {} A\tilde{e}+B{u_c}+B\{[f(x,t)-\hat{f}(\hat{x},t)]\\&+\,[g(x,t)-\hat{g}(\hat{x},t)]u+\tilde{d}\}-{K_o}{C^T}\tilde{e}\\ \tilde{e}_1= & {} {C^T}\tilde{e} \end{aligned}$$

which can be written as

$$\begin{aligned} \dot{\tilde{e}}= & {} \big (A-{K_o}{C^T}\big ){\tilde{e}}+B{u_c}+B\{[f(x,t)-\hat{f}(\hat{x},t)]\nonumber \\&+\,[g(x,t)-\hat{g}(\hat{x},t)]u+\tilde{d}\} \end{aligned}$$
(124)
$$\begin{aligned} \tilde{e}_1= & {} {C^T}{\tilde{e}} \end{aligned}$$
(125)

or equivalently, it can be written as

$$\begin{aligned} \dot{\tilde{e}}= & {} \big (A-{K_o}{C^T}\big ){\tilde{e}}+B{u_c}+Bw+B\tilde{d} \end{aligned}$$
(126)
$$\begin{aligned} \tilde{e}_1= & {} {C^T}{\tilde{e}} \end{aligned}$$
(127)

Approximation of Functions f(x, t) and g(x, t)

Next, the following approximators of the unknown system dynamics are defined

$$\begin{aligned} \hat{f}(\hat{x})=\begin{pmatrix} \hat{f}_1(\hat{x}|\theta _{f}) \\ \hat{f}_2(\hat{x}|\theta _{f}) \end{pmatrix}, \quad \hat{x}{\in }R^{4\times 1}, \ \hat{f}_i(\hat{x}|\theta _{f}) \ {\in } \ R^{1\times 1}, \ i=1,2 \end{aligned}$$
(128)

with kernel functions

$$\begin{aligned} \phi _f^{i,j}(\hat{x})={{{\prod _{j=1}^n}{\mu _{A_j}^i}(\hat{x}_j)} \over {{\sum _{i=1}^N}{\prod _{j=1}^n}{\mu _{A_j}^i}(\hat{x}_j)}} \end{aligned}$$
(129)

where \(l=1,2\), \(\hat{x}\) is the estimate of the state vector and \(\mu _{A_j}^{i}(\hat{x}_j)\) is the membership function of the j-th input in the antecedent (IF) part of the i-th fuzzy rule. Similarly, the following approximators of the unknown system dynamics are defined (Fig. 2)

$$\begin{aligned} \hat{g}(\hat{x})= \begin{pmatrix} \hat{g}_1(\hat{x}|\theta _{g}) \\ \hat{g}_2(\hat{x}|\theta _{g}) \end{pmatrix}, \quad \hat{x}{\in }R^{4\times 1}, \ \hat{g}_i(\hat{x}|\theta _{g}) \ {\in } \ R^{1\times 2}, \ i=1,2 \end{aligned}$$
(130)
Fig. 2 Neurofuzzy approximator used for estimating the unknown system dynamics
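A minimal sketch of the approximators of Eqs. (128)–(130) is given below. Gaussian membership functions, a common spread \(\sigma\), and one weight column per output component are assumptions made only for illustration; the normalized kernels implement Eq. (129).

```python
import numpy as np

def fuzzy_kernels(x_hat, centers, sigma):
    """Normalized kernels of Eq. (129): phi_l = prod_j mu_j^l / sum_l prod_j mu_j^l.

    x_hat   : (n,) estimated state vector
    centers : (N, n) membership-function centers, one row per fuzzy rule
    sigma   : common spread of the (assumed Gaussian) membership functions
    """
    mu = np.exp(-((x_hat - centers) / sigma) ** 2)   # (N, n) membership values
    firing = np.prod(mu, axis=1)                     # rule firing strengths
    return firing / np.sum(firing)                   # (N,), sums to one

def f_hat(x_hat, theta_f, centers, sigma):
    """Linear-in-the-weights approximator of Eq. (128); theta_f has one column per output."""
    phi = fuzzy_kernels(x_hat, centers, sigma)       # (N,)
    return theta_f.T @ phi                           # (2,) estimate of [f1, f2]

def g_hat(x_hat, theta_g, centers, sigma):
    """Approximator of Eq. (130); theta_g stacks the 2x2 = 4 entries of g per rule."""
    phi = fuzzy_kernels(x_hat, centers, sigma)
    return (theta_g.T @ phi).reshape(2, 2)           # (2, 2) estimate of g

# Hypothetical usage: 3 Gaussian sets per input, 4 inputs -> N = 81 rules,
# centers a (81, 4) grid, sigma = 0.5, theta_f in R^{81x2}, theta_g in R^{81x4}
```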

The values of the weights that result in optimal approximation are

$$\begin{aligned} {\theta _{f}^{*}}= & {} \textit{arg} \ \textit{min}_{\theta _{f}{\in }M_{\theta _{f}}}[\textit{sup}_{\hat{x}{\in }U_{\hat{x}}}(f(x)-\hat{f}(\hat{x}|\theta _{f}))]\nonumber \\ {\theta _{g}^{*}}= & {} \textit{arg} \ \textit{min}_{\theta _{g}{\in }M_{\theta _{g}}}[\textit{sup}_{\hat{x}{\in }U_{\hat{x}}}(g(x)-\hat{g}(\hat{x}|\theta _{g}))] \end{aligned}$$
(131)

where the variation ranges for the weights are defined as

$$\begin{aligned} M_{\theta _{f}}= & {} \{\theta _{f}{\in }{R^h}: \ ||\theta _{f}||{\le }m_{\theta _{f}} \}\nonumber \\ M_{\theta _{g}}= & {} \{\theta _{g}{\in }{R^h}: \ ||\theta _{g}||{\le }m_{\theta _{g}} \} \end{aligned}$$
(132)

For the approximation error defined in Eq. (118), the value that corresponds to the optimal weight vectors \(\theta _{f}^{*}\) and \(\theta _{g}^{*}\) is

$$\begin{aligned} w=\begin{pmatrix} f(x,t)-\hat{f}(\hat{x}|\theta _f^{*}) \\ \end{pmatrix}+\begin{pmatrix} g(x,t)-\hat{g}(\hat{x}|\theta _g^{*}) \\ \end{pmatrix}{u} \end{aligned}$$
(133)

which is next written as

$$\begin{aligned} w= & {} \begin{pmatrix} f(x,t)-\hat{f}(\hat{x}|\theta _{f})+\hat{f}(\hat{x}|\theta _{f})-\hat{f}(\hat{x}|\theta _f^{*}) \\ \end{pmatrix}\nonumber \\&+\begin{pmatrix} g(x,t)-\hat{g}(\hat{x}|\theta _{g})+\hat{g}(\hat{x}|\theta _{g})-\hat{g}(\hat{x}|\theta _g^{*}) \\ \end{pmatrix}{u} \end{aligned}$$
(134)

which can be also written in the following form

$$\begin{aligned} w=\begin{pmatrix} w_{a}+w_{b} \end{pmatrix} \end{aligned}$$
(135)

where

$$\begin{aligned} w_{a}= & {} [f(x,t)-\hat{f}(\hat{x}|\theta _{f})]+[g(x,t)-\hat{g}(\hat{x}|\theta _{g})]u\nonumber \\ \end{aligned}$$
(136)
$$\begin{aligned} w_{b}= & {} [\hat{f}(\hat{x}|\theta _f)-\hat{f}(\hat{x}|\theta _f^{*})]+[\hat{g}(\hat{x}|\theta _g) -\hat{g}(\hat{x}|\theta _g^{*})]u\nonumber \\ \end{aligned}$$
(137)

Moreover, the following weights error vectors are defined

$$\begin{aligned} \tilde{\theta }_{f}= & {} \theta _{f}-\theta _f^{*} \nonumber \\ \tilde{\theta }_{g}= & {} \theta _{g}-\theta _g^{*} \end{aligned}$$
(138)

Lyapunov Stability Analysis

Design of the Lyapunov Function

The adaptation law of the weights \(\theta _f\) and \(\theta _g\) of the neurofuzzy approximators, as well as the equation of the supervisory control term \(u_c\), are derived from the requirement that the first derivative of the following Lyapunov function be negative

$$\begin{aligned} V={1 \over 2}{{\hat{e}^T}{P_1}{\hat{e}}}+{1 \over 2}{{\tilde{e}^T}{P_2}{\tilde{e}}}+{1 \over {2{\gamma _1}}}{\tilde{\theta }_f^T}{\tilde{\theta }_f}+{1 \over {2{\gamma _2}}}tr[{\tilde{\theta }_g^T}{\tilde{\theta }_g}]\nonumber \\ \end{aligned}$$
(139)

The selection of the Lyapunov function is based on the following principle of indirect adaptive control: minimization of \(\hat{e}\) implies \(\lim _{t \rightarrow \infty }{\hat{x}(t)}={x_d}(t)\), while minimization of \(\tilde{e}\) implies \(\lim _{t \rightarrow \infty }{\hat{x}(t)}=x(t)\). Together these yield \(\lim _{t \rightarrow \infty }x(t)={x_d}(t)\). Substituting Eqs. (107), (108) and Eqs. (124), (125) into Eq. (139) and differentiating results in

$$\begin{aligned} \dot{V}= & {} {1 \over 2}{\dot{\hat{e}}^T}{P_1}{\hat{e}}+{1\over 2}{\hat{e}^T}{P_1}{\dot{\hat{e}}}+{1 \over 2}{\dot{\tilde{e}}^T}{P_2}{{\tilde{e}}}+{1 \over 2}{\tilde{e}^T}{P_2}{\dot{\tilde{e}}}\nonumber \\&+\,{1 \over {\gamma _1}}{\dot{\tilde{\theta }}_f^T}{{\tilde{\theta }}_f}+{1 \over {\gamma _2}}tr\left[ {\dot{\tilde{\theta }}}_g^T{\tilde{\theta }_g}\right] \Rightarrow \end{aligned}$$
(140)
$$\begin{aligned} \dot{V}= & {} {1 \over 2}\Big \{\big (A-BK^T\big )\hat{e}+{K_o}{C^T}\tilde{e}\Big \}^T{P_1}{\hat{e}}\nonumber \\&+\,{1 \over 2}{{\hat{e}}^T}{P_1}\Big \{\big (A-BK^T\big )\hat{e}+{K_o}{C^T}\tilde{e}\Big \}\nonumber \\&+\,{1 \over 2} \Big \{\big (A-{K_o}C^T\big )\tilde{e}+B{u_c}+B\tilde{d}+Bw\Big \}^T{P_2}{\tilde{e}}\nonumber \\&+\,{1 \over 2} {\tilde{e}^T}{P_2}\Big \{\big (A-{K_o}C^T\big )\tilde{e}+Bu_c+B\tilde{d}+Bw \Big \}\nonumber \\&+\,{1 \over {\gamma _1}}{\dot{\tilde{\theta }}_f^T}{{\tilde{\theta }}_f}+{1 \over {\gamma _2}}tr\left[ {\dot{\tilde{\theta }}}_g^T{\tilde{\theta }_g}\right] \Rightarrow \end{aligned}$$
(141)
$$\begin{aligned} \dot{V}= & {} {1 \over 2}\Big \{ {\hat{e}^T}\big (A-B{K^T}\big )^T+{\tilde{e}^T}C{K_o^T}\Big \}{P_1}{\hat{e}}\nonumber \\&+\,{1 \over 2}{\hat{e}^T}{P_1}\Big \{\big (A-B{K^T}\big ){\hat{e}}+ {K_o}{C^T}{\tilde{e}}\Big \}\nonumber \\&+\,{1 \over 2} \Big \{{\tilde{e}^T}\big (A-{K_o}C^T\big )^T+{u_c^T}{B^T}+{w^T}{B^T}+{\tilde{d}^T}{B^T} \Big \}{P_2}{\tilde{e}}\nonumber \\&+\,{1 \over 2}{\tilde{e}^T}{P_2}\Big \{\big (A-{K_o}{C^T}\big ){\tilde{e}}+B{u_c}+Bw+B\tilde{d}\Big \}\nonumber \\&+\,{1 \over {\gamma _1}}{\dot{\tilde{\theta }}_f^T}{{\tilde{\theta }}_f}+{1 \over {\gamma _2}}tr\left[ {\dot{\tilde{\theta }}}_g^T{\tilde{\theta }_g}\right] \Rightarrow \end{aligned}$$
(142)
$$\begin{aligned} \dot{V}= & {} {1 \over 2}{\hat{e}^T}{\big (A-BK^T\big )^T}{P_1}{\hat{e}}+ {1 \over 2}{\tilde{e}^T}C{K_o^T}{P_1}{\hat{e}}\nonumber \\&+\,{1 \over 2}{\hat{e}^T{P_1}\big (A-BK^T\big )\hat{e}}+{1 \over 2}{\hat{e}^T}{P_1}{K_o}{C^T}{\tilde{e}}\nonumber \\&+\,{1 \over 2}{\tilde{e}^T}{\big (A-{K_oC^T}\big )^T}{P_2}{\tilde{e}} +{1 \over 2}(u_c^T+w^T+\tilde{d}^T){B^T}{P_2}\tilde{e}\nonumber \\&+ \,{1 \over 2}{\tilde{e}^T}{P_2}\big (A-{K_o}C^T\big ){\tilde{e}}+{1 \over 2}{\tilde{e}^T}{P_2}B(u_c+w+\tilde{d})\nonumber \\&+\,{1 \over {\gamma _1}}{\dot{\tilde{\theta }}_f^T}{{\tilde{\theta }}_f}+{1 \over {\gamma _2}}tr\left[ {\dot{\tilde{\theta }}}_g^T{\tilde{\theta }_g}\right] \end{aligned}$$
(143)

Assumption 1

For given positive definite matrices \(Q_1\) and \(Q_2\) there exist positive definite matrices \(P_1\) and \(P_2\), which are the solution of the following Riccati equations [28]

$$\begin{aligned}&{\big (A-BK^T\big )^T}{P_1}+{P_1}\big (A-BK^T\big )+Q_1=0 \end{aligned}$$
(144)
$$\begin{aligned}&{\big (A-{K_o}C^T\big )}^T{P_2}+{P_2}{\big (A-{K_o}C^T\big )}\nonumber \\&\quad -\,{P_2}B\left( {2 \over r}-{1 \over {\rho ^2}}\right) {B^T}{P_2}+{Q_2}=0 \end{aligned}$$
(145)
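Numerically, Eq. (144) is a Lyapunov-type linear equation in \(P_1\), while Eq. (145) can be cast as a standard continuous-time algebraic Riccati equation in \(P_2\) with the effective weight \(R^{-1}=({2 \over r}-{1 \over \rho ^2})I\), provided \(2/r>1/\rho ^2\). A hedged sketch follows; the gains \(K^T\), \(K_o\), the output matrix C and the weights \(Q_1\), \(Q_2\), r, \(\rho \) shown are hypothetical choices made only for illustration.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

# Brunovsky-form matrices of Eq. (119); C selects the measured outputs (assumption)
A = np.array([[0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 1], [0, 0, 0, 0]], dtype=float)
B = np.array([[0, 0], [1, 0], [0, 0], [0, 1]], dtype=float)
C = np.array([[1, 0, 0, 0], [0, 0, 1, 0]], dtype=float).T   # C^T picks x1 and x3

# Hypothetical gains and weights (not the values used in the paper)
K_T = np.array([[4.0, 4.0, 0.0, 0.0], [0.0, 0.0, 4.0, 4.0]])            # K^T
K_o = np.array([[10.0, 0.0], [25.0, 0.0], [0.0, 10.0], [0.0, 25.0]])    # observer gain
Q1, Q2 = np.eye(4), np.eye(4)
r, rho = 0.1, 0.5                                   # must satisfy 2/r > 1/rho^2

# Eq. (144): (A - B K^T)^T P1 + P1 (A - B K^T) + Q1 = 0  (Lyapunov equation)
Ac = A - B @ K_T
P1 = solve_continuous_lyapunov(Ac.T, -Q1)

# Eq. (145) as a standard CARE: Ao^T P2 + P2 Ao - P2 B R^{-1} B^T P2 + Q2 = 0
# with R^{-1} = (2/r - 1/rho^2) I
Ao = A - K_o @ C.T
R = np.eye(2) / (2.0 / r - 1.0 / rho ** 2)
P2 = solve_continuous_are(Ao, B, Q2, R)

# Both solutions should be symmetric positive definite
assert np.all(np.linalg.eigvalsh(P1) > 0) and np.all(np.linalg.eigvalsh(P2) > 0)
```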

The conditions given in Eqs. (144) and (145) are related to the requirement that the systems described by Eqs. (107), (108) and Eqs. (124), (125) are strictly positive real. Substituting Eqs. (144) and (145) into \(\dot{V}\) yields

$$\begin{aligned} \dot{V}= & {} {1 \over 2}{\hat{e}^T}\Big \{\big (A-BK^T\big )^T{P_1}+{P_1}\big (A-BK^T\big )\Big \}{\hat{e}}\nonumber \\&+\,{\tilde{e}^T}C{K_o^T}{P_1}{\hat{e}}+\,{1 \over 2}{\tilde{e}^T}\Big \{\big (A-{K_o}C^T\big )^T{P_2}\nonumber \\&+\,{P_2}\big (A-{K_o}{C^T}\big )\Big \}{\tilde{e}}+\,{\tilde{e}^T}{P_2}B(u_c+w+\tilde{d})\nonumber \\&+{1 \over {\gamma _1}}{\dot{\tilde{\theta }}_f^T}{{\tilde{\theta }}_f}+{1 \over {\gamma _2}}tr\left[ {\dot{\tilde{\theta }}}_g^T{\tilde{\theta }_g}\right] \end{aligned}$$
(146)

i.e.

$$\begin{aligned} \dot{V}= & {} -{1 \over 2}{{\hat{e}^T}{Q_1}{\hat{e}}}+{\tilde{e}^T}C{K_o^T}{P_1}{\hat{e}}\nonumber \\&-\,{1 \over 2} \tilde{e}^T \left\{ {Q_2}-{P_2}B\left( {2 \over r}-{1 \over {\rho ^2}}\right) {B^T}{P_2}\right\} {\tilde{e}}\nonumber \\&+\,{\tilde{e}^T}{P_2}{B}(u_c+w+\tilde{d})+{1 \over {\gamma _1}}{\dot{\tilde{\theta }}_f^T}{{\tilde{\theta }}_f}+{1 \over {\gamma _2}}tr\left[ {\dot{\tilde{\theta }}}_g^T{\tilde{\theta }_g}\right] \nonumber \\ \end{aligned}$$
(147)

The supervisory control \(u_c\) is decomposed into two terms, \(u_a\) and \(u_b\).

Fig. 3 The proposed \(H_{\infty }\) control scheme

  • The control term \(u_a\) is given by

    $$\begin{aligned} u_a=-{1 \over r}{\tilde{e}^T}{P_2}B+{\Delta }{u_a} \end{aligned}$$
    (148)

    where, assuming that the measurable elements of vector \(\tilde{e}\) are \(\{\tilde{e}_1, \tilde{e}_3, \ldots , \tilde{e}_k\}\), the term \({\Delta }u_a\) is such that

    $$\begin{aligned}&-\,{1 \over r}{\tilde{e}^T}{P_2}B+{\Delta }{u_a}\nonumber \\&\quad =-{1 \over r} \begin{pmatrix} p_{11}\tilde{e}_1+p_{13}\tilde{e}_3+\cdots +p_{1k}\tilde{e}_k \\ p_{13}\tilde{e}_1+p_{33}\tilde{e}_3+\cdots +p_{3k}\tilde{e}_k \\ \cdots \ \cdots \cdots \\ p_{1k}\tilde{e}_1+p_{3k}\tilde{e}_3+\cdots +p_{kk}\tilde{e}_k \\ \end{pmatrix} \end{aligned}$$
    (149)
  • The control term \(u_b\) is given by

    $$\begin{aligned} u_b=-[{({P_2}B)^T}({P_2}B)]^{-1}({P_2}B)^TC{K_o^T}{P_1}{\hat{e}} \end{aligned}$$
    (150)
    • \(u_a\) is an \(H_{\infty }\) control term used for the compensation of the approximation error w and the additive disturbance \(\tilde{d}\). Its first component \(-{1 \over r}{\tilde{e}^T}{P_2}B\) has been chosen so as to compensate for the term \({1 \over r}{\tilde{e}^T}{P_2}B{B^T}{P_2}\tilde{e}\), which appears in Eq. (147). By including also the second component \({\Delta }{u_a}\), the term \(u_a\) is computed using feedback of only the measurable variables \(\{\tilde{e}_1, \tilde{e}_3, \ldots , \tilde{e}_k\}\), out of the complete vector \(\tilde{e}=[\tilde{e}_1,\tilde{e}_2,\ldots ,\tilde{e}_n]\). Eq. (148) gives \(u_a\) in its final form, \(u_a=-{1 \over r}{\tilde{e}^T}{P_2}B+{\Delta }{u_a}\).

    • \(u_b\) is a control term used for the compensation of the observation error (the control term \(u_b\) has been chosen so as to satisfy the condition \({\tilde{e}^T}{P_2}B{u_b}=-{\tilde{e}^T}C{K_o^T}{P_1}\hat{e}\)).

The control scheme is depicted in Fig. 3.
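For completeness, a brief sketch of how the supervisory terms of Eqs. (148)–(150) could be evaluated at each control cycle is given below. It reuses the matrices of the previous sketch and handles the restriction of \(u_a\) to the measurable error components in a simplified way, by zeroing the unmeasured entries of \(\tilde{e}\) (an assumption made only for illustration).

```python
import numpy as np

def supervisory_control(P1, P2, B, C, K_o, e_hat, e_tilde_meas, r):
    """Supervisory terms u_a (Eqs. 148-149) and u_b (Eq. 150).

    e_tilde_meas : observation error with the unmeasured components zeroed out,
                   i.e. only e_tilde_1 and e_tilde_3 are assumed available.
    """
    # u_a = -(1/r) e_tilde^T P2 B, evaluated with the measurable components only
    u_a = -(1.0 / r) * (e_tilde_meas @ P2 @ B)            # (2,)

    # u_b = -[(P2 B)^T (P2 B)]^{-1} (P2 B)^T C K_o^T P1 e_hat
    P2B = P2 @ B                                          # (4, 2)
    u_b = -np.linalg.solve(P2B.T @ P2B, P2B.T @ C @ K_o.T @ P1 @ e_hat)

    return u_a + u_b                                      # u_c = u_a + u_b

# Hypothetical usage with the P1, P2, K_o of the previous sketch:
# e_hat = x_hat - x_d;  e_tilde_meas = np.array([e1, 0.0, e3, 0.0])
# u_c = supervisory_control(P1, P2, B, C, K_o, e_hat, e_tilde_meas, r=0.1)
```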

Substituting Eqs. (148) and (150) in \(\dot{V}\) and assuming that Eqs. (144) and (145) hold, one gets

$$\begin{aligned} \dot{V}= & {} -{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}+{\tilde{e}^T}C{K_o^T}{P_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}\nonumber \\&+\,{1 \over r}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}-{1 \over {2\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}\nonumber \\&+\,{\tilde{e}^T}{P_2}B{u_a}+{\tilde{e}^T}{P_2}B{u_b}+{\tilde{e}^T}{P_2}B(w+\tilde{d})\nonumber \\&+\,{1 \over {\gamma _1}}{\dot{\tilde{\theta }}_f^T}{{\tilde{\theta }}_f} +{1 \over {\gamma _2}}tr\left[ {\dot{\tilde{\theta }}}_g^T{\tilde{\theta }_g}\right] \end{aligned}$$
(151)

or equivalently,

$$\begin{aligned} \dot{V}= & {} -{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}}\nonumber \\&+\, {\tilde{e}^T}{P_2}B(w+\tilde{d}+{\Delta }u_a)+{1 \over {\gamma _1}}{\dot{\tilde{\theta }}_f^T}{{\tilde{\theta }}_f}\nonumber \\&+\,{1 \over {\gamma _2}}tr\left[ {\dot{\tilde{\theta }}}_g^T{\tilde{\theta }_g}\right] \end{aligned}$$
(152)

It holds that \(\dot{\tilde{\theta }}_f=\dot{\theta }_f-\dot{\theta _f^*}=\dot{\theta _f}\) and \(\dot{\tilde{\theta }}_g=\dot{\theta }_g-\dot{\theta _g^*}=\dot{\theta _g}\). The following weight adaptation laws are considered:

$$\begin{aligned} \dot{\theta }_f= & {} -{\gamma _1}{\Phi (\hat{x})^T}{B^T}{P_2}\tilde{e} \nonumber \\ \dot{\theta }_g= & {} -{\gamma _2}{\Phi (\hat{x})^T}{B^T}{P_2}\tilde{e}{u^T} \end{aligned}$$
(153)

where, assuming N fuzzy rules and associated kernel functions, the dimensions of the matrices are \({\theta _f}{\in }R^{N\times 1}\), \({\theta _g}{\in }R^{N\times 2}\), \(\Phi (\hat{x}){\in }R^{2\,{\times }\,N}\), \(B{\in }R^{4\times 2}\), \(P_2{\in }R^{4\times 4}\) and \(\tilde{e}{\in }R^{4\times 1}\).
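In a digital implementation, the adaptation laws of Eq. (153) would typically be integrated with a forward-Euler step over the sampling period. The following sketch assumes a kernel matrix \(\Phi (\hat{x}){\in }R^{2{\times }N}\) with the above dimensions and is only indicative of one possible discretization.

```python
import numpy as np

def adapt_weights(theta_f, theta_g, Phi, B, P2, e_tilde, u, gamma1, gamma2, dt):
    """One forward-Euler step of the adaptation laws of Eq. (153).

    theta_f : (N, 1) weights of the f-approximator
    theta_g : (N, 2) weights of the g-approximator
    Phi     : (2, N) kernel matrix Phi(x_hat)
    e_tilde : (4, 1) observation error,  u : (2, 1) applied control input
    """
    common = Phi.T @ B.T @ P2 @ e_tilde            # (N, 1) shared gradient direction
    theta_f_dot = -gamma1 * common                 # (N, 1)
    theta_g_dot = -gamma2 * common @ u.T           # (N, 2)

    # Euler integration over one sampling period (dt = 0.01 s in the simulations)
    return theta_f + dt * theta_f_dot, theta_g + dt * theta_g_dot
```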

The update of \(\theta _f\) is a gradient type algorithm. The update of \(\theta _g\) is also a gradient type algorithm, where \(u_c\) implicitly tunes the adaptation gain \(\gamma _2\) [33, 34]. Substituting Eq. (153) in \(\dot{V}\) gives

$$\begin{aligned} \dot{V}= & {} -{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}}\nonumber \\&+\,{\tilde{e}^T}{P_2}B(w+\tilde{d}+{\Delta }u_a)\nonumber \\&+\,{1 \over \gamma _1}(-\gamma _1){\tilde{e}^T}{P_2}B\Phi (\hat{x})(\theta _f-\theta _f^{*})\nonumber \\&+\,{1 \over \gamma _2}(-\gamma _2)tr\left[ {u}{\tilde{e}^T}{P_2}B\Phi (\hat{x})(\theta _g-\theta _g^{*})\right] \end{aligned}$$
(154)

or

$$\begin{aligned} \dot{V}= & {} -{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}}\nonumber \\&+\, {\tilde{e}^T}{P_2}B(w+\tilde{d}+{\Delta }u_a)\nonumber \\&+\,{1 \over \gamma _1}(-\gamma _1){\tilde{e}^T}{P_2}B\Phi (\hat{x})(\theta _f-\theta _f^{*})\nonumber \\&+\,{1 \over \gamma _2}(-\gamma _2)tr\left[ {u}{\tilde{e}^T}{P_2}B(\hat{g}(\hat{x}|\theta _g)-\hat{g}(\hat{x}|\theta _g^{*}))\right] \end{aligned}$$
(155)

Taking into account that \(u\ {\in } \ R^{2\times 1}\) and \({\tilde{e}^T}{P_2}B(\hat{g}(\hat{x}|\theta _g)-\hat{g}(\hat{x}|\theta _g^{*})) \ {\in } \ R^{1\times 2}\) it holds

$$\begin{aligned} \dot{V}= & {} -{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}}\nonumber \\&+\, {\tilde{e}^T}{P_2}B(w+\tilde{d}+{\Delta }u_a)\nonumber \\&+\,{1 \over \gamma _1}(-\gamma _1){\tilde{e}^T}{P_2}B\Phi (\hat{x})(\theta _f-\theta _f^{*})\nonumber \\&+\,{1 \over \gamma _2}(-\gamma _2)tr\left[ {\tilde{e}^T}{P_2}B(\hat{g}(\hat{x}|\theta _g)-\hat{g}(\hat{x}|\theta _g^{*})){u}\right] \end{aligned}$$
(156)

Since \({\tilde{e}^T}{P_2}B(\hat{g}(\hat{x}|\theta _g)-\hat{g}(\hat{x}|\theta _g^{*})){u}{\in }R^{1\times 1}\) it holds

$$\begin{aligned}&tr({\tilde{e}^T}{P_2}B(\hat{g}(\hat{x}|\theta _g)-\hat{g}(\hat{x}|\theta _g^{*})){u})\nonumber \\&\quad ={\tilde{e}^T}{P_2}B(\hat{g}(\hat{x}|\theta _g)-\hat{g}(\hat{x}|\theta _g^{*})){u} \end{aligned}$$
(157)

Therefore, one finally obtains

$$\begin{aligned} \dot{V}= & {} -{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}}\nonumber \\&+\,{\tilde{e}^T}{P_2}B(w+\tilde{d}+{\Delta }u_a)\nonumber \\&+\,{1 \over \gamma _1}(-\gamma _1){\tilde{e}^T}{P_2}B\Phi (\hat{x})(\theta _f-\theta _f^{*})\nonumber \\&+\,{1 \over \gamma _2}(-\gamma _2){\tilde{e}^T}{P_2}B(\hat{g}(\hat{x}|\theta _g)-\hat{g}(\hat{x}|\theta _g^{*})){u} \end{aligned}$$
(158)

Next, the following approximation error is defined

$$\begin{aligned} w_{\alpha }=[\hat{f}(\hat{x}|\theta _f^{*})-\hat{f}(\hat{x}|\theta _f)] +[\hat{g}(\hat{x}|\theta _g^{*})-\hat{g}(\hat{x}|\theta _g)]u \end{aligned}$$
(159)

Thus, one obtains

$$\begin{aligned} \dot{V}= & {} -{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}}\nonumber \\&+\,{\tilde{e}^T}{P_2}B(w+\tilde{d}+{\Delta }u_a)+{\tilde{e}^T}{P_2}B{w_{\alpha }} \end{aligned}$$
(160)

Denoting the aggregate approximation error and disturbances vector as

$$\begin{aligned} w_1=w+\tilde{d}+w_{\alpha }+{\Delta }u_a \end{aligned}$$
(161)

the derivative of the Lyapunov function becomes

$$\begin{aligned} \dot{V}= & {} -{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}\nonumber \\&-\,{{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}}+{\tilde{e}^T}{P_2}B{w_1} \end{aligned}$$
(162)

which in turn is written as

$$\begin{aligned} \dot{V}= & {} -{1 \over 2}{\hat{e}^T}{Q_1}{\hat{e}}-{1 \over 2}{\tilde{e}^T}{Q_2}{\tilde{e}}-{{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}}\nonumber \\&+\,{1 \over 2}{\tilde{e}^T}{P_2}B{w_1}+{1 \over 2}{w_1^T}{B^T}{P_2}\tilde{e} \end{aligned}$$
(163)

Lemma The following inequality holds

$$\begin{aligned}&{1 \over 2}{\tilde{e}^T}{P_2}B{w_1}+{1 \over 2}{w_1^T}{B^T}{P_2}{\tilde{e}}-{1 \over 2{\rho ^2}}{\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}\nonumber \\&\quad \le {1 \over 2}{\rho ^2}{w_1^T}{w_1} \end{aligned}$$
(164)

Proof The binomial \(({\rho }a-{1 \over \rho }b)^2 \ge 0\) is considered. Expanding the left part of the above inequality one gets

$$\begin{aligned}&{\rho ^2}{a^2}+{1 \over {\rho ^2}}{b^2}-2ab \ge 0 \Rightarrow \nonumber \\&{1\over 2}{\rho ^2}{a^2}+{1 \over {2\rho ^2}}{b^2}-ab \ge 0 \Rightarrow \nonumber \\&ab-{1 \over {2\rho ^2}}{b^2} \le {1 \over 2}{\rho ^2}{a^2} \Rightarrow \nonumber \\&{1 \over 2}ab+{1 \over 2}ab-{1 \over {2\rho ^2}}{b^2} \le {1 \over 2}{\rho ^2}{a^2} \end{aligned}$$
(165)

The following substitutions are carried out: \(a=w_1\) and \(b=\tilde{e}^T{P_2}B\) and the previous relation becomes

$$\begin{aligned}&{1 \over 2}{w_1^T}{B^T}{P_2}{\tilde{e}}+{1 \over 2}{\tilde{e}^T}{P_2}B{w_1}-{{1 \over {2\rho ^2}} {\tilde{e}^T}{P_2}B{B^T}{P_2}{\tilde{e}}}\nonumber \\&\quad \le {1 \over 2} {\rho ^2}{w_1^T}{w_1} \end{aligned}$$
(166)
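As a quick numerical sanity check, inequality (166) can be verified for randomly drawn vectors; the values standing in for \(P_2B\), \(\tilde{e}\), \(w_1\) and \(\rho \) in the sketch below are arbitrary.

```python
import numpy as np

# Numerical spot-check of inequality (166) for random vectors and rho = 0.5
rng = np.random.default_rng(0)
rho = 0.5
for _ in range(1000):
    P2B = rng.normal(size=(4, 2))      # stands in for the product P2 B
    e_t = rng.normal(size=4)           # observation error e_tilde
    w1 = rng.normal(size=2)            # aggregate disturbance w1
    b = P2B.T @ e_t                    # b = B^T P2 e_tilde
    lhs = w1 @ b - (b @ b) / (2.0 * rho ** 2)
    rhs = 0.5 * rho ** 2 * (w1 @ w1)
    assert lhs <= rhs + 1e-12          # inequality (166) holds for every draw
```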

Using inequality (166) in \(\dot{V}\) and enforcing the bound given by its right-hand side, one obtains

$$\begin{aligned} \dot{V} {\le } -{1 \over 2}{\hat{e}^T{Q_1}{\hat{e}}}-{1 \over 2}{\tilde{e}^T{Q_2}{\tilde{e}}}+{1 \over 2}{\rho ^2}{w_1^T}{w_1} \end{aligned}$$
(167)

Thus, Eq. (167) can be written as

$$\begin{aligned} \dot{V} \le -{1 \over 2}{E^T}QE+{1 \over 2}{\rho ^2}{w_1^T}{w_1} \end{aligned}$$
(168)

where

$$\begin{aligned} E=\begin{pmatrix} \hat{e} \\ \tilde{e} \end{pmatrix}, \ \ Q=\begin{pmatrix} Q_1 &{} 0 \\ 0 &{} Q_2 \end{pmatrix}=diag[Q_1,Q_2] \end{aligned}$$
(169)

Hence, the \(H_{\infty }\) performance criterion is derived. For \(\rho \) sufficiently small Eq. (167) will be true and the \(H_{\infty }\) tracking criterion will be satisfied. In that case, the integration of \(\dot{V}\) from 0 to T gives

$$\begin{aligned}&{\int _0^T}{\dot{V}(t)}dt \le -{1 \over 2} {\int _0^T}{||E||_Q^2}dt+{1 \over 2}{\rho ^2}{\int _0^T}{||w_1||^2}dt \Rightarrow \nonumber \\&2V(T)-2V(0) \le -{\int _0^T}{||E||_Q^2}dt+{\rho ^2}{\int _0^T}{||w_1||^2}dt \Rightarrow \nonumber \\&2V(T)+{\int _0^T}{||E||_Q^2}dt \le 2V(0)+{\rho ^2}{\int _0^T}{||w_1||^2}dt \end{aligned}$$
(170)

It is assumed that there exists a positive constant \(M_w>0\) such that \(\int _0^{\infty }{||w_1||^2}dt \le M_w\). Therefore for the integral \(\int _0^{T}{||E||_Q^2}dt\) one gets

$$\begin{aligned} {\int _0^{\infty }}{||E||_Q^2}dt \le 2V(0)+{\rho ^2}{M_w} \end{aligned}$$
(171)

Thus, the integral \({\int _0^{\infty }}{||E||_Q^2}dt\) is bounded and according to Barbalat’s Lemma

$$\begin{aligned} \lim \nolimits _{t \rightarrow \infty }{E(t)}=0 \Rightarrow \nonumber \\ \begin{array}{c} \lim _{t \rightarrow \infty }{\hat{e}(t)}=0 \\ \lim _{t \rightarrow \infty }{\tilde{e}(t)}=0 \end{array} \end{aligned}$$
(172)

Therefore \(\lim _{t \rightarrow \infty }{e(t)}=0\).

Fig. 4 a Tracking of reference set-point 1 by the state variables \(z_i, \ i=1,\ldots ,4\) of the transformed Diesel engine model. b Tracking of reference set-point 1 by the state variables \(x_i, \ i=1,\ldots ,3\) of the initial nonlinear Diesel engine model

The Role of Riccati Equation Coefficients in \(H_{\infty }\) Control Robustness

The linear system of Eqs. (124) and (125) is considered again

$$\begin{aligned} \dot{\tilde{e}}= & {} \big (A-{K_o}C^T\big ){\tilde{e}}+B{u_c}+B\{[f(x,t)\\&-\,\hat{f}(\hat{x},t)]+[g(x,t)-\hat{g}(\hat{x},t)]u+\tilde{d}\}\\ \tilde{e}_1= & {} {C^T}{\tilde{e}} \end{aligned}$$

The aim of \(H_{\infty }\) control is to eliminate the impact of the modelling errors \(w=[f(x,t)-\hat{f}(\hat{x},t)]+[g(x,t)-\hat{g}(\hat{x},t)]u\) and the external disturbances \(\tilde{d}\) which are not white noise signals. This implies the minimization of the quadratic cost function [3537]:

$$\begin{aligned} J(t)= & {} {1 \over 2} \int _{0}^{T} {\tilde{e}^T(t)}\tilde{e}(t)+r{u_c^T(t)}u_c(t)\nonumber \\&-{\rho ^2}{(w+\tilde{d})^T}{(w+\tilde{d})}dt, \ \ r, \rho >0 \end{aligned}$$
(173)

The weight r determines how much the control signal should be penalized and the weight \(\rho \) determines how much the disturbances' influence should be rewarded, in the sense of a mini-max differential game. The control input \(u_c\) has been defined as the sum of the terms described in Eqs. (148) and (150).

The parameter \(\rho \) in Eq. (173) is an indication of the closed-loop system robustness. If the value of \(\rho >0\) is decreased excessively with respect to r, then the solution of the Riccati equation is no longer a positive definite matrix. Consequently, there is a lower bound \(\rho _{min}\) of \(\rho \) for which the \(H_{\infty }\) control problem has a solution. The acceptable values of \(\rho \) lie in the interval \([\rho _{min},\infty )\). If \(\rho _{min}\) is found and used in the design of the \(H_{\infty }\) controller, then the closed-loop system will have increased robustness. On the contrary, if a value \(\rho > \rho _{min}\) is used, then an admissible stabilizing \(H_{\infty }\) controller will be derived, but it will be a suboptimal one. The Hamiltonian matrix

$$\begin{aligned} H=\begin{pmatrix} A-{K_o}{C^T} &{}\quad -({2 \over r}-{1 \over \rho ^2})B{B^T} \\ -Q &{}\quad -({A-{K_o}{C^T}})^T \end{pmatrix} \end{aligned}$$
(174)

provides a criterion for the existence of a solution of the Riccati equation of Eq. (145). A necessary condition for the solution of the algebraic Riccati equation to be a positive semi-definite symmetric matrix is that H has no purely imaginary eigenvalues [37].
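This criterion suggests a simple numerical search for \(\rho _{min}\): decrease \(\rho \) until the Hamiltonian of Eq. (174) acquires (nearly) purely imaginary eigenvalues or the Riccati solver no longer returns a positive definite \(P_2\). A hedged sketch, reusing the hypothetical \(A_o=A-{K_o}{C^T}\), B, \(Q_2\) and r of the earlier sketches, is the following.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def rho_is_admissible(rho, Ao, B, Q, r, tol=1e-9):
    """Check whether the Riccati equation (145) admits a positive definite
    solution P2 for the given rho, using the Hamiltonian of Eq. (174)."""
    gain = 2.0 / r - 1.0 / rho ** 2
    if gain <= 0.0:
        return False
    H = np.block([[Ao, -gain * (B @ B.T)],
                  [-Q, -Ao.T]])
    if np.any(np.abs(np.linalg.eigvals(H).real) < tol):
        return False                      # (nearly) purely imaginary eigenvalue
    try:
        P2 = solve_continuous_are(Ao, B, Q, np.eye(B.shape[1]) / gain)
    except Exception:
        return False
    return bool(np.all(np.linalg.eigvalsh(P2) > 0))

def find_rho_min(Ao, B, Q, r, rho_grid):
    """Coarse scan for the smallest admissible rho over a user-supplied grid."""
    ok = [rho for rho in sorted(rho_grid) if rho_is_admissible(rho, Ao, B, Q, r)]
    return ok[0] if ok else None

# Hypothetical usage (Ao, Q2, r taken from the earlier Riccati sketch):
# rho_min = find_rho_min(Ao, B, Q2, r=0.1, rho_grid=np.linspace(0.05, 2.0, 40))
```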

Fig. 5 a Tracking of reference set-point 2 by the state variables \(z_i, \ i=1,\ldots ,4\) of the transformed Diesel engine model. b Tracking of reference set-point 2 by the state variables \(x_i, \ i=1,\ldots ,3\) of the initial nonlinear Diesel engine model

Fig. 6 a Tracking of reference set-point 3 by the state variables \(z_i, \ i=1,\ldots ,4\) of the transformed Diesel engine model. b Tracking of reference set-point 3 by the state variables \(x_i, \ i=1,\ldots ,3\) of the initial nonlinear Diesel engine model

Fig. 7 a Tracking of reference set-point 4 by the state variables \(z_i, \ i=1,\ldots ,4\) of the transformed Diesel engine model. b Tracking of reference set-point 4 by the state variables \(x_i, \ i=1,\ldots ,3\) of the initial nonlinear Diesel engine model

Fig. 8 a Tracking of reference set-point 5 by the state variables \(z_i, \ i=1,\ldots ,4\) of the transformed Diesel engine model. b Tracking of reference set-point 5 by the state variables \(x_i, \ i=1,\ldots ,3\) of the initial nonlinear Diesel engine model

Simulation Tests

The performance of the proposed observer-based adaptive fuzzy MIMO controller was tested on the MIMO nonlinear model of the turbocharged Diesel engine (Fig. 1). The differentially flat model of the Diesel engine and its transformation to the Brunovsky form has been analyzed in the “Nonlinear Control of the Diesel Engine Using Differential Flatness Theory” section.

The state feedback gain was \(K{\in }R^{2\times 4}\). The basis functions used in the estimation of \(f_i(\hat{x},t), \ i=1,2\) and \(g_{ij}(\hat{x},t), \ i=1,2, \ j=1,2\) were of the form \(\mu _{A_j}(\hat{x})= e^{-({{\hat{x}-c_j} \over \sigma })^2}\), \(j=1,\ldots ,3\). Since there are four inputs \(\hat{x}_1\), \(\dot{\hat{x}}_1\), \(\hat{x}_3\), \(\dot{\hat{x}}_3\), each of which is described by 3 fuzzy sets, the approximation of the functions \(f_i(\hat{x},t), \ i=1,2\) requires 81 fuzzy rules of the form:

$$\begin{aligned}&R^l: \textit{IF} \ \ {\hat{x}_1} \ \ \textit{is} \ \ A_1^l \ \ \textit{AND} \ \ {\dot{\hat{x}}_1} \ \ \textit{is} \ \ A_2^l\nonumber \\&\textit{AND} \ \ {\hat{x}_3} \ \ \textit{is} \ \ A_3^l \ \ \textit{AND} \ \ {\dot{\hat{x}}_3} \ \ \textit{is} \ \ A_4^l \ \ \textit{THEN} \ \ \hat{f}_i^l \ \ \textit{is} \ \ b^l \end{aligned}$$
(175)

The aggregate output of the neuro-fuzzy approximator (rule-base) is \(\hat{f}_i(\hat{x},t)={ {\sum _{l=1}^{81}}{\hat{f}_i^l}{\prod _{i=1}^4}{\mu _{A_i}^l(\hat{x}_i)} \over {\sum _{l=1}^{81}}{\prod _{i=1}^4}{\mu _{A_i}^l(\hat{x}_i)}}\).
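A sketch of this rule base is shown below; the centers of the three fuzzy sets per input and the spread \(\sigma \) are hypothetical choices, and the 81 rule centers are formed as the Cartesian product of the per-input sets.

```python
import itertools
import numpy as np

# Three (assumed) Gaussian sets per input over a normalized operating range
set_centers = np.array([-1.0, 0.0, 1.0])
sigma = 0.5

# One rule per combination of the 4 inputs' fuzzy sets: 3^4 = 81 rule centers
rule_centers = np.array(list(itertools.product(set_centers, repeat=4)))   # (81, 4)

def f_hat_i(x_hat, weights_i):
    """Aggregate rule-base output: weighted average of the consequents b^l of
    Eq. (175), with Gaussian rule firing strengths (weights_i has 81 entries)."""
    mu = np.exp(-((x_hat - rule_centers) / sigma) ** 2)   # (81, 4)
    firing = np.prod(mu, axis=1)                          # (81,)
    return np.dot(weights_i, firing) / np.sum(firing)

# Hypothetical usage: x_hat = np.array([x1, x1_dot, x3, x3_dot]);
# f1 = f_hat_i(x_hat, theta_f[:, 0])
```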

The estimation of the control input gain functions \(\hat{g}_{ij}(\hat{x},t), \ i=1,2, \ j=1,2\) was derived in a similar way. The overall simulation time was \(t_s=40\) sec and the sampling period was taken to be 0.01 sec. At the beginning of the training of the neuro-fuzzy approximators their weights were initialized to zero. Moreover, the elements of the diesel engine's state vector were also initialized to zero. The positive definite matrices \(P_1{\in }R^{4\times 4}\) and \(P_2{\in }R^{4\times 4}\) stem from the solution of the algebraic Riccati equations of Eqs. (144) and (145), for \(Q_1\) and \(Q_2\) also positive definite.

The approximations \(\hat{f}\) and \(\hat{g}\) were used in the derivation of the control law, given by Eq. (102). To show the disturbance rejection capability of the proposed adaptive fuzzy controller, at the beginning of the second half of the simulation time additive sinusoidal disturbances of amplitude \(A=0.5\) and period \(T=10\) sec were applied to the diesel engine model.

The performance of the differential flatness theory-based adaptive fuzzy control loop was tested in the case of tracking of different reference setpoints. The obtained results are depicted in Figs. 4, 5, 6, 7, and 8. It can be observed that the proposed adaptive fuzzy control scheme achieved fast and accurate tracking of all of these setpoints.

The root mean square error (RMSE) of the examined control loop was also calculated (using the same controller parameters) for the tracking of setpoints 1 to 5. The results are summarized in Table 1. It can be seen that the transient characteristics of the control scheme are also quite satisfactory.

Table 1 RMSE of Diesel engine’s state variables

Conclusions

The paper has examined the use of differential flatness theory as the basis of adaptive fuzzy control of turbocharged diesel engines. The development of embedded control for diesel engines exhibits particular difficulties, such as the engine’s nonlinear dynamics, uncertainties and disturbances affecting the engine’s model and the difficulty in measuring specific elements of the engine’s state vector. In particular, the engine’s model does not admit static feedback linearization and this increases the degree of difficulty of this nonlinear control problem. To handle this, it has been proposed to apply dynamic feedback linearization which is based on extending the state-space description of the engine with the inclusion of additional state variables representing the derivatives of the control inputs.

It has been shown that the extended state-space model of the turbocharged diesel engine satisfies differential flatness properties and can finally be transformed into the MIMO canonical (Brunovsky) form. The latter description facilitates the design of a state feedback controller that assures that the elements of the state vector of the engine converge asymptotically to the desirable setpoints. For the case in which there is no prior knowledge about the diesel engine dynamics, adaptive fuzzy control can be implemented. After applying the transformation that was based on differential flatness theory, the MIMO system was written in the canonical form. The resulting control inputs were shown to contain nonlinear elements which depend on the system's parameters. Since the parameters of the system were unknown, the nonlinear terms which appear in the control inputs had to be approximated with the use of neuro-fuzzy networks. Moreover, since only the system's output is measurable, the complete state vector had to be reconstructed with the use of a state observer. It has been shown that a suitable learning law can be defined for the aforementioned neuro-fuzzy approximators so as to preserve the closed-loop system stability. With the use of Lyapunov stability analysis it has also been proven that the proposed observer-based adaptive fuzzy control scheme results in \(H_{\infty }\) tracking performance. For the design of the observer-based adaptive fuzzy controller one had to solve two Riccati equations, where the first one was associated with the controller and the second one with the observer.

The presented case study on observer-based adaptive fuzzy control shows that it is possible to apply indirect adaptive fuzzy control also to systems that admit dynamic feedback linearization. This is particularly important for the design of MIMO controllers capable of efficiently compensating for modeling uncertainties and external disturbances in such a class of nonlinear dynamical systems. Unlike other adaptive fuzzy control schemes, which rely on several assumptions about the structure of the nonlinear system as well as about the uncertainty characterizing the system's model, the proposed adaptive fuzzy control scheme based on differential flatness theory offers an exact solution to the design of fuzzy controllers for unknown dynamical systems. Besides, it enables control of MIMO nonlinear systems without the need to measure all state vector elements. The only assumption needed for the design of the controller and for achieving \(H_{\infty }\) tracking performance of the control loop is that there exists a solution for the two Riccati equations associated with the linearized error dynamics of the differentially flat model. This assumption is quite reasonable for several nonlinear systems, thus providing a systematic approach to the design of reliable controllers for such systems.