1 Introduction

As practical engineering systems become increasingly large and complex, abrupt variations in parameters or structures often occur due to factors such as component failures, network constraints and changes in the external environment [1]. Traditional linear models cannot describe such systems accurately. Fortunately, Markov jump systems (MJSs), as an important class of stochastic hybrid systems, have emerged. Owing to their powerful modeling capability, MJSs have attracted increasing attention [2, 3] and have been widely applied in many areas. For example, MJSs were applied to highway traffic systems [4]. Transient faults on power lines were modeled as a Markov chain, and the power systems were described as MJSs [5]. Furthermore, MJSs have also been widely used in economics, communications, aerospace and other fields [6,7,8].

With the combination of computer technology, communication technology and control theory, networked control systems (NCSs) have developed rapidly thanks to the advantages of less wiring, high reliability and information sharing. However, the introduction of the network also brings new issues, e.g., time delay and packet dropouts. To solve these problems, a variety of approaches have been proposed [9,10,11,12,13,14,15,16,17]. In the case of packet dropouts, the works [9,10,11] investigated the fuzzy control of nonlinear MJSs, the \({H_\infty }\) filtering of networked nonlinear discrete-time systems and the output feedback control of NCSs, respectively. As for networked MJSs with time delay [12,13,14,15,16], the issue of state estimation based on a sliding mode observer was addressed [13]. The stability of delayed MJSs with infinitely many Markov states was analyzed, and a sufficient condition for the mean square stability of the system was given in the form of linear matrix inequalities (LMIs) [15].

On the other hand, issues like time delay and packet dropouts lead to inadequate information transmission between different nodes. In MJSs, inadequate information transmission inevitably causes asynchronous phenomena. However, most of the available works [18,19,20,21,22] assume that the controller is either mode-independent or perfectly synchronous with the physical plant. The former ignores the mode information, which is simple in structure and easy to implement but introduces considerable conservatism. The latter requires the controller to obtain the mode information accurately and in time, which is difficult to achieve in practical systems. Recently, the asynchronous problem of MJSs has attracted more and more attention. The main asynchronous modeling approaches include the time-delay model [23], the piecewise homogeneous Markov chain model [24] and the hidden Markov model (HMM) [25,26,27,28,29]. Based on an HMM, an asynchronous state feedback \({H_\infty }\) controller for time-delay MJSs was designed [26], and the asynchronous output tracking control with incomplete premise matching for T-S fuzzy MJSs was investigated [29]. In addition, dissipative theory has been studied for a long time; dissipativity means that the stored energy is less than the supplied energy, that is, energy is dissipated in the system. In recent years, scholars have conducted extensive research on dissipative control [12, 30,31,32]. Based on dissipative theory, the stability of sampled-data MJSs was studied [30], and the problem of sliding mode control for nonlinear MJSs with external disturbances was investigated [31]. For incremental dissipative control of nonlinear stochastic MJSs, sufficient conditions for the stochastic incremental dissipativity of such systems were given by using incremental Hamilton–Jacobi inequalities [32]. However, the current research on asynchronous dissipative control of MJSs is still insufficient, which is one of the motivations of this work.

It is noted that the data transmission in the above works is based on a periodic (time-triggered) scheme, in which the system signals are sampled and transmitted with a fixed period; this can easily lead to network congestion, packet dropouts and other problems. The event-triggered scheme has been considered an efficient method to overcome these obstacles [33,34,35]. Under an event-triggered scheme, whether a system signal is transmitted or not is determined by an event-triggered criterion, which can effectively reduce the communication consumption. The event-triggered risk-sensitive state estimation problem for HMMs was studied; by using the reference probability measure method, the estimation problem was transformed into an equivalent problem and solved [36]. To solve the issue of event-triggered self-adaptive control for stochastic nonlinear MJSs, two self-adaptive controllers with a fixed threshold scheme and a relative threshold scheme were proposed [37]. Moreover, an event-triggered output feedback control strategy was proposed for networked MJSs with partially unknown transition probabilities [38], and an event-triggered sliding mode control strategy was considered for discrete-time MJSs [39]. Nevertheless, there are few works on dissipative asynchronous control of time-delay MJSs with an event-triggered scheme and packet dropouts, which is another motivation of this work.

This paper focuses on the design of an asynchronous dissipative controller for networked time-delay MJSs with an event-triggered scheme and packet dropouts. The main contributions of this paper are as follows:

1. Compared with the literature [26], this paper considers not only the time-varying delay of the physical plant, but also network-induced communication constraints such as limited bandwidth and packet dropouts, which is more practical.

2. The asynchronous dissipative control scheme proposed in this paper provides a unified framework: the HMM-based asynchronous strategy contains two special cases, mode independence and synchronization, which are the most studied in the existing literature; at the same time, the dissipative control problem covers \({H_\infty }\) control and passive control as special cases.

The remainder of this paper is organized as follows: Sect. 2.1 gives the problem description and system modeling. In Sect. 2.2, a sufficient condition for the closed-loop control system to be stochastically stable and strictly dissipative is obtained. A controller design method is given in Sect. 2.3. A simulation example with results and discussion is provided in Sect. 3. Section 4 draws the conclusion.

2 Methods

2.1 Problem description

Fig. 1

Event-triggered asynchronous control with packet dropouts

This work focuses on the design of the asynchronous dissipative controller for networked time-delay MJSs with packet dropouts under the event-triggered scheme, as shown in Fig. 1. Consider the networked time-delay MJSs as follows:

$$\begin{aligned} \left\{ {\begin{array}{*{20}{l}} {{x_{k + 1}} = {A_{{\partial _k}}}{x_k} + {A_{d{\partial _k}}}{x_{k - {d_k}}} + {B_{1{\partial _k}}}{u_k} + {D_{1{\partial _k}}}{w_k}}\\ {{y_k} = {C_{{\partial _k}}}{x_k} + {C_{d{\partial _k}}}{x_{k - {d_k}}} + {B_{2{\partial _k}}}{u_k} + {D_{2{\partial _k}}}{w_k}}\\ {{x_{{k_0}}} = {\chi _{{k_0}}},{k_0} = - {d_2}, - {d_2} + 1, \cdots , - 1,0} \end{array}} \right. \end{aligned}$$
(1)

where \({x_k} \in {{\mathbb {R}}^{{n_x}}}\), \({\chi _{{k_0}}}\), \({y_k} \in {{\mathbb {R}}^{{n_y}}}\), \({u_k} \in {{\mathbb {R}}^{{n_u}}}\) and \({w_k} \in {{\mathbb {R}}^{{n_w}}}\) (\({w_k} \in l[0,\infty )\)) denote the system state, the initial state, the controlled output, the control input and the disturbance input, respectively. The time delay \({d_k} \in {{\mathbb {N}}^ + }\) in system (1) has the lower bound \({d_1}\) and the upper bound \({d_2}\) with \({d_1} < {d_2}\). (\({A_{{\partial _k}}}\), \({A_{d{\partial _k}}}\), \({B_{1{\partial _k}}}\), \({B_{2{\partial _k}}}\), \({C_{{\partial _k}}}\), \({C_{d{\partial _k}}}\), \({D_{1{\partial _k}}}\), \({D_{2{\partial _k}}}\)) are given real matrices with appropriate dimensions. The Markov jump process of system (1) is governed by the mode parameter \({\partial _k}\) (\({\partial _k} \in \mathrm{{S}}\), \(\mathrm {S} = \{ 1,2, \cdots ,s\}\)) and complies with the transition probability matrix (TPM) \(\Upsilon = [{\pi _{pq}}]\), in which the transition probability (TP) \({\pi _{pq}}\) is defined as follows:

$$\begin{aligned} \Pr \{ {\partial _{k + 1}} = q|{\partial _k} = p\} = {\pi _{pq}} \end{aligned}$$
(2)

obviously, \({\pi _{pq}} \in [0,1]\) and \(\sum \nolimits _{q = 1}^s {{\pi _{pq}} = 1}\) for \(\forall p,q \in \mathrm{{S}}\).

Considering the limited bandwidth and energy in the system, we introduce an event-triggered mechanism to reduce the transmission rate of the sampled signals and relieve the communication pressure. The event-triggered mechanism is as follows:

$$\begin{aligned} {\hat{x}_{ik}} = \left\{ {\begin{array}{*{20}{l}} {{x_{ik}},}&{}{|{{\hat{x}}_{i(k - 1)}} - {x_{ik}}| > {\delta _i}|{x_{ik}}|}\\ {{{\hat{x}}_{i(k - 1)}},}&{}{|{{\hat{x}}_{i(k - 1)}} - {x_{ik}}| \le {\delta _i}|{x_{ik}}|} \end{array}} \right. \end{aligned}$$
(3)

where \({\delta _i} \in \left[ {0,1} \right]\) is the error threshold of the i-th component. The sampled signal is transmitted to the controller only when the event-triggered condition is satisfied. Denote \({H_k} = diag\{ {\Delta _{1k}},{\Delta _{2k}}, \cdots ,{\Delta _{{n_x}k}}\}\) with \({\Delta _{ik}} \in [ - {\delta _i},{\delta _i}]\), \(i = 1,2, \cdots ,{n_x}\), where \({\Delta _{ik}}\) denotes the relative deviation of the latest transmitted value \({{\hat{x}}_{ik}}\) from the current sample \({x_{ik}}\). Then, according to (3), we can obtain

$$\begin{aligned} {\hat{x}_k} = (I + {H_k}){x_k} \end{aligned}$$
(4)

Remark 1

Owing to the introduction of the event-triggered transmission scheme, the sampled signal does not need to be transmitted periodically, thus reducing the data transmission frequency. In addition, we introduce the data transmission performance index \(DTP = {t_S}/{t_T} \times 100\%\) to quantify the communication performance [40], in which \({t_S}\) and \({t_T}\) denote the transmission times of the sampled data with and without the event-triggered mechanism, respectively.
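To make the mechanism (3) and the DTP index concrete, the following Python sketch simulates the componentwise trigger on an arbitrary state sequence and counts the componentwise transmissions; the counting convention, the test data and all identifiers are illustrative assumptions made for this sketch only.

```python
import numpy as np

def event_trigger(x_seq, delta):
    """Componentwise event trigger (3): component i is re-transmitted only when it
    deviates from its last transmitted value by more than delta[i]*|x_ik|;
    otherwise the previously transmitted value is held."""
    n_steps, n_x = x_seq.shape
    x_hat = np.zeros_like(x_seq)
    last = x_seq[0].copy()              # the first sample is assumed transmitted
    transmissions = n_x                 # count that initial transmission
    for k in range(n_steps):
        for i in range(n_x):
            if abs(last[i] - x_seq[k, i]) > delta[i] * abs(x_seq[k, i]):
                last[i] = x_seq[k, i]   # event fired: transmit this component
                transmissions += 1
        x_hat[k] = last
    return x_hat, transmissions

# Illustrative data and componentwise counting of DTP = t_S / t_T * 100%
rng = np.random.default_rng(0)
x_seq = np.cumsum(0.05 * rng.standard_normal((200, 2)), axis=0)
x_hat, t_S = event_trigger(x_seq, delta=np.array([0.15, 0.15]))
t_T = x_seq.size                        # every component sent at every instant
print(f"DTP = {100.0 * t_S / t_T:.1f}%")
```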

Based on the output of the trigger (4), we will adopt the following asynchronous controller:

$$\begin{aligned} {u_k} = {K_{{\sigma _k}}}{\hat{x}_k} \end{aligned}$$
(5)

where \({K_{{\sigma _k}}}\) represents the controller gain, and \({\sigma _k}\) is the mode of the controller. The mode \({\partial _k}\) of system (1) affects the mode \({\sigma _k}\) of the controller through the conditional probability matrix (CPM) \(\Psi = [{\theta _{pj}}]\), whose conditional probability (CP) \({\theta _{pj}}\) is described as follows [41]:

$$\begin{aligned} \Pr \{ {\sigma _k} = j|{\partial _k} = p\} = {\theta _{pj}} \end{aligned}$$
(6)

which means that the controller works in mode j with probability \({\theta _{pj}}\) when system (1) is in mode p, where \({\theta _{pj}} \in [0,1]\) and \(\sum \nolimits _{j = 1}^s {{\theta _{pj}}} = 1\) for all \(p,j \in S\).

Remark 2

The controller and the physical plant (1) are mode-asynchronous, and the asynchronous relationship is described by an HMM, which characterizes, through the CPM, both the asynchronization and the connection between the controller and the plant. The controller's mode is influenced by the plant's mode, and the level of asynchronization is reflected by the CPs. In addition, the asynchronous controller under the HMM scheme is more general, since it covers both the synchronous (i.e., \(\Psi = I\)) and the mode-independent (i.e., \({\sigma _k} \in \{ 1\}\)) cases [42].
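As a concrete illustration of these two special cases, the controller mode can be drawn directly from a row of the CPM; a minimal Python sketch (two modes, 0-based mode indices, all identifiers illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def controller_mode(p, Psi, rng):
    """Draw the controller mode sigma_k from row p of the CPM, cf. (6)."""
    return rng.choice(Psi.shape[1], p=Psi[p])

# Two-mode illustration of the special cases covered by the HMM scheme:
Psi_sync = np.eye(2)                      # Psi = I: sigma_k always equals partial_k
Psi_ind  = np.array([[1.0, 0.0],          # every row concentrates on the first mode,
                     [1.0, 0.0]])         # so sigma_k is mode-independent
print(controller_mode(1, Psi_sync, rng))  # -> 1 (synchronous with the plant)
print(controller_mode(1, Psi_ind, rng))   # -> 0 (the first mode, whatever the plant does)
```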

Considering that there exists a network between the controller and the actuator, packet dropouts are inevitable. A Bernoulli stochastic process is used in this work to describe the packet dropout process:

$$\begin{aligned} {\hat{u}_k} = {\beta _k}{u_k} \end{aligned}$$
(7)

where \({\beta _k}\) denotes the Bernoulli process with

$$\begin{aligned} \Pr \{ {\beta _k} = 1\} = \beta , \Pr \{ {\beta _k} = 0\} = 1 - \beta \end{aligned}$$
(8)

and satisfies

$$\begin{aligned} \mathrm{E}\{ {\beta _k}\} = \beta , \mathrm{E}\{ \beta _k^2\} = \beta \end{aligned}$$
(9)

Furthermore, to facilitate subsequent derivations, define \({{\bar{\beta }} _k} = {\beta _k} - \beta\); then we obtain

$$\begin{aligned} \mathrm{E}\{ {{\bar{\beta }} _k}\} = 0, \mathrm{E}\{ {\bar{\beta }} _k^2\} = {{\bar{\beta }} ^2} \end{aligned}$$
(10)

where \({\bar{\beta }} = \sqrt{\beta - {\beta ^2}}\).
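Properties (9) and (10) are easy to confirm empirically; a small Monte Carlo check (sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 0.9
beta_k = (rng.random(100_000) < beta).astype(float)   # Bernoulli samples, cf. (8)
beta_bar = beta_k - beta                               # \bar{beta}_k of (10)
print(beta_k.mean(), (beta_k ** 2).mean())             # both close to beta, cf. (9)
print(beta_bar.mean(), (beta_bar ** 2).mean())         # close to 0 and beta - beta^2
```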

For the convenience of expression, let \({\partial _k} = p\), \({\partial _{k + 1}} = q\) and \({\sigma _k} = j\). According to (1), (4) and (7), we can obtain the following closed-loop dynamic system:

$$\begin{aligned} \left\{ {\begin{array}{*{20}{l}} {{x_{k + 1}} = {{\bar{A}}_{pjk}}{x_k} + {A_{dp}}{x_{k - d_{k}}} + {D_{1p}}{w_k}}\\ {{y_k} = {{\bar{C}}_{pjk}}{x_k} + {C_{dp}}{x_{k - d _{k}}} + {D_{2p}}{w_k}} \end{array}} \right. \end{aligned}$$
(11)

where \({\bar{A}_{pjk}} = {A_p} + {\beta _k}{B_{1p}}{K_j}(I+{H_k})\), \({\bar{C}_{pjk}} = {C_p} + {\beta _k}{B_{2p}}{K_j}(I+{H_k})\).

Next, some important definitions and a lemma are introduced to facilitate the subsequent derivations.

Definition 2.1

[43] The system (11) with \({w_k} \equiv 0\) is said to be stochastically stable if, for any initial condition \(({x_0},{\partial _0})\), it holds that

$$\begin{aligned} \mathrm{E} \left\{ \sum \limits _{k = 0}^\infty ||x_k||^2|x_0,\partial _0 \right\} < \infty \end{aligned}$$
(12)

Definition 2.2

[43] Given a constant \(\gamma > 0\), matrices \(\mu \le 0\), \(\vartheta\) and symmetric \(\upsilon\), the closed-loop system (11) is said to be strictly \((\mu ,\vartheta ,\upsilon ) - \gamma -\)dissipative if, for any positive integer N, any \({w_k} \in l[0,\infty )\) and under the zero initial condition, the following holds:

$$\begin{aligned} \sum \limits _{k = 0}^N {\mathrm{E}\{ F({w_k},{y_k})\} \ge \gamma \sum \limits _{k = 0}^N {w_k^{\mathrm{{T}}}{w_k}} } \end{aligned}$$
(13)

where \(F({w_k},{y_k}) = y_k^{\mathrm{{T}}}\mu {y_k} + 2y_k^{\mathrm{{T}}}\vartheta {w_k} + w_k^{\mathrm{{T}}}\upsilon {w_k}\), and the negative semi-definite matrix \(\mu\) is factorized as \(\mu \buildrel \Delta \over = - U_1^{\mathrm{{T}}}{U_1}\).
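For intuition, the supply rate \(F({w_k},{y_k})\) and inequality (13) can be evaluated along a stored trajectory; the sketch below is only an empirical check on one realization (the definition involves an expectation and all admissible disturbances), with illustrative function names and the weights passed as 2-D arrays:

```python
import numpy as np

def supply_rate(y, w, mu, vartheta, upsilon):
    """Quadratic supply rate F(w_k, y_k) of Definition 2.2 (y, w as 1-D arrays)."""
    return y @ mu @ y + 2 * y @ vartheta @ w + w @ upsilon @ w

def dissipativity_margin(y_traj, w_traj, mu, vartheta, upsilon, gamma):
    """LHS minus RHS of (13) accumulated over one stored trajectory; a nonnegative
    value indicates that (13) holds along this particular realization."""
    lhs = sum(supply_rate(y, w, mu, vartheta, upsilon) for y, w in zip(y_traj, w_traj))
    rhs = gamma * sum(w @ w for w in w_traj)
    return lhs - rhs
```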

Lemma 2.1

[44] Given matrices A, B, C with \({A^{\mathrm{{T}}}} = A\), the inequality

$$\begin{aligned} A + CB + {B^{\mathrm{{T}}}}{C^{\mathrm{{T}}}} < 0 \end{aligned}$$
(14)

holds if there exists a matrix \(D > 0\), satisfying

$$\begin{aligned} A + C{D^{ - 1}}{C^{\mathrm{{T}}}} + {B^{\mathrm{{T}}}}DB < 0 \end{aligned}$$
(15)
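The lemma rests on a completion-of-square bound; the following numerical sanity check illustrates it for randomly chosen matrices (the dimensions, seed and the choice \(D = I\) are arbitrary assumptions of this sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
B = rng.standard_normal((n, n))
C = rng.standard_normal((n, n))
D = np.eye(n)                                     # any D > 0 will do for the bound

# Completion of squares: C D^{-1} C^T + B^T D B - (C B + B^T C^T) is PSD.
gap = C @ np.linalg.inv(D) @ C.T + B.T @ D @ B - (C @ B + B.T @ C.T)
print(np.min(np.linalg.eigvalsh((gap + gap.T) / 2)) >= -1e-9)    # True

# Hence a symmetric A satisfying (15) also satisfies (14):
A = -(C @ np.linalg.inv(D) @ C.T + B.T @ D @ B) - np.eye(n)      # enforces (15)
print(np.max(np.linalg.eigvalsh(A + C @ B + B.T @ C.T)) < 0)     # (14) holds: True
```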

2.2 Stability and dissipativity analysis of the system

This section will derive a sufficient condition to ensure that the system (11) is stochastically stable and strictly \((\mu ,\vartheta ,\upsilon ) - \gamma -\)dissipative.

Theorem 2.1

The system (11) is stochastically stable and strictly \((\mu ,\vartheta ,\upsilon ) - \gamma -\)dissipative if there exist a matrix \({K_j} \in {{\mathbb {R}}^{{n_u} \times {n_x}}}\), positive definite matrices \({P_p} \in {{\mathbb {R}}^{{n_x} \times {n_x}}}\), \({Q} \in {{\mathbb {R}}^{{n_x} \times {n_x}}}\), \({R_{pj}} \in {{\mathbb {R}}^{{n_x} \times {n_x}}}\) and a positive definite diagonal matrix \({G_{pj}} \in {{\mathbb {R}}^{{n_u} \times {n_u}}}\) such that, for all \(p,j \in \mathrm{{S}}\),

$$\begin{aligned} \sum \limits _{j = 1}^s {{\theta _{pj}}{R_{pj}} < {P_p}} \end{aligned}$$
(16)
$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} {{\Pi _{pj}}}&{}{{N_{pj}}}&{}{L^{\mathrm{{T}}}\Lambda {G_{pj}}}\\ *&{}{ - {G_{pj}}}&{}0\\ *&{}*&{}{ - {G_{pj}}} \end{array}} \right] < 0 \end{aligned}$$
(17)

where

\({\Pi _{pj}} = \left[ {\begin{array}{*{20}{c}} { - \bar{P}_p^{ - 1}}&{}0&{}0&{}0&{}{\bar{A}_{pj}^*}&{}{{A_{dp}}}&{}{{D_{1p}}}\\ *&{}{ - \bar{P}_p^{ - 1}}&{}0&{}0&{}{{\bar{\beta }} {{\bar{B}}_{1pj}}}&{}0&{}0\\ *&{}*&{}{ - I}&{}0&{}{{U_1}\bar{C}_{pj}^*}&{}{{U_1}{C_{dp}}}&{}{{U_1}{D_{2p}}}\\ *&{}*&{}*&{}{ - I}&{}{{\bar{\beta }} {U_1}{{\bar{B}}_{2pj}}}&{}0&{}0\\ *&{}*&{}*&{}*&{}{dQ - {R_{pj}}}&{}0&{}{ - \bar{C}_{pj}^{*\mathrm{{T}}}\vartheta }\\ *&{}*&{}*&{}*&{}*&{}{ - Q}&{}{ - C_{dp}^{\mathrm{{T}}}\vartheta } \\ *&{}*&{}*&{}*&{}*&{}*&{}M_p \end{array}} \right]\),

\({N_{pj}} = {\left[ {\begin{array}{*{20}{c}} {\beta \bar{B}_{1pj}^{\mathrm{{T}}}}&{{\bar{\beta }} \bar{B}_{1pj}^{\mathrm{{T}}}}&{\beta \bar{B}_{2pj}^{\mathrm{{T}}}}&{{\bar{\beta }} \bar{B}_{2pj}^{\mathrm{{T}}}}&0&0&{\beta {{({\vartheta ^{\mathrm{{T}}}}{B_{2pj}}{K_j})}^{\mathrm{{T}}}}} \end{array}} \right] ^{\mathrm{{T}}}}\),

\({L} = \left[ {\begin{array}{*{20}{c}} 0&0&0&0&I&0&0 \end{array}} \right]\), \(\bar{A}_{pj}^* = {A_p} + \beta {B_{1p}}{K_j}\), \({\bar{B}_{1pj}} = {B_{1p}}{K_j}\),

\(\bar{C}_{pj}^* = {C_p} + \beta {B_{2p}}{K_j}\), \({\bar{B}_{2pj}} = {B_{2p}}{K_j}\), \(\Lambda = diag\{ {\delta _1},{\delta _2}, \cdots ,{\delta _{{n_x}}}\}\),

\(M_p = - D_{2p}^{\mathrm{{T}}}\vartheta - {\vartheta ^{\mathrm{{T}}}}{D_{2p}} + \gamma I - \upsilon\), \({\bar{P}_p} = \sum \limits _{q = 1}^s {{\pi _{pq}}{P_q}}\), \(d = {d_2} - {d_1} + 1\).

Proof

First, applying the Schur complement to (17), we obtain

$$\begin{aligned} {\Pi _{pj}} + N_{pj}^{\mathrm{{T}}}G_{pj}^{ - 1}{N_{pj}} + {L}\Lambda {G_{pj}}\Lambda L^{\mathrm{{T}}} < 0 \end{aligned}$$
(18)

By Lemma 2.1, we have

$$\begin{aligned} {{\bar{\Pi }} _{pjk}} \buildrel \Delta \over = {\Pi _{pj}} + N_{pj}^{\mathrm{{T}}}{H_k}L^{\mathrm{{T}}} + {L}{H_k}{N_{pj}} < 0 \end{aligned}$$
(19)

and thus obtain

$$\begin{aligned} {{\bar{\Pi }} _{pjk}} = \left[ {\begin{array}{*{20}{c}} { - \bar{P}_p^{ - 1}}&{}0&{}0&{}0&{}{{{\tilde{A}}_{pjk}}}&{}{{A_{dp}}}&{}{{D_{1p}}}\\ *&{}{ - \bar{P}_p^{ - 1}}&{}0&{}0&{}{{\bar{\beta }} {{\tilde{B}}_{1pjk}}}&{}0&{}0\\ *&{}*&{}{ - I}&{}0&{}{{U_1}{{\tilde{C}}_{pjk}}}&{}{{U_1}{C_{dp}}}&{}{{U_1}{D_{2p}}}\\ *&{}*&{}*&{}{ - I}&{}{{\bar{\beta }} {U_1}{{\tilde{B}}_{2pjk}}}&{}0&{}0\\ *&{}*&{}*&{}*&{}{dQ - {R_{pj}}}&{}0&{}{ - \tilde{C}_{pjk}^{\mathrm{{T}}}\vartheta }\\ *&{}*&{}*&{}*&{}*&{}{ - Q}&{}{ - C_{dp}^{\mathrm{{T}}}\vartheta }\\ *&{}*&{}*&{}*&{}*&{}*&{}M_p \end{array}} \right] < 0 \end{aligned}$$
(20)

where

\({\tilde{A}_{pjk}} = {A_p} + \beta {B_{1p}}{K_j}(I + {H_k})\), \({\tilde{B}_{1pjk}} = {B_{1p}}{K_j}(I + {H_k})\),

\({\tilde{C}_{pjk}} = {C_p} + \beta {B_{2p}}{K_j}(I + {H_k})\), \({\tilde{B}_{2pjk}} = {B_{2p}}{K_j}(I + {H_k})\).

which implies

$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} { - \bar{P}_p^{ - 1}}&{}0&{}{{{\tilde{A}}_{pjk}}}&{}{{A_{dp}}}\\ *&{}{ - \bar{P}_p^{ - 1}}&{}{{\bar{\beta }} {{\tilde{B}}_{1pjk}}}&{}0\\ *&{}*&{}{dQ - {R_{pj}}}&{}0\\ *&{}*&{}*&{}{ - Q} \end{array}} \right] < 0 \end{aligned}$$
(21)

Then, applying the Schur complement to (21) and (20) again, we have

$$\begin{aligned} \left\{ {\begin{array}{*{20}{l}} {{\Omega _{1pjk}} \buildrel \Delta \over = {\Gamma ^1} - \phi _{pjk}^{\mathrm{{T}}}{\bar{\Gamma }} _p^1{\phi _{pjk}} < diag\{ {R_{pj}},0\} }\\ {{\Omega _{2pjk}} \buildrel \Delta \over = \Gamma _{pjk}^2 - \varphi _{pjk}^{\mathrm{{T}}}{\bar{\Gamma }} _p^2{\varphi _{pjk}} < diag\{ {R_{pj}},0,0\} } \end{array}} \right. \end{aligned}$$
(22)

where

\({\bar{\Gamma }} _p^1 = diag\{ - \bar{P}_p, - \bar{P}_p\}\), \({\bar{\Gamma }} _p^2 = diag\{ - \bar{P}_p, - \bar{P}_p, - I, - I\}\), \({\Gamma ^1} = \left[ {\begin{array}{*{20}{c}} {dQ}&{}0\\ *&{}{ - Q} \end{array}} \right]\), \({\phi _{pjk}} = \left[ {\begin{array}{*{20}{c}} {{{\tilde{A}}_{pjk}}}&{}{{A_{dp}}}\\ {{\bar{\beta }} {{\tilde{B}}_{1pjk}}}&{}0 \end{array}} \right]\),

\(\Gamma _{pjk}^2 = \left[ {\begin{array}{*{20}{c}} {dQ}&{}0&{}{ - \tilde{C}_{pjk}^{\mathrm{{T}}}\vartheta }\\ *&{}{ - Q}&{}{ - C_{dp}^{\mathrm{{T}}}\vartheta }\\ *&{}*&{}{{M_p}} \end{array}} \right]\), \({\varphi _{pjk}} = \left[ {\begin{array}{*{20}{c}} {{{\tilde{A}}_{pjk}}}&{}{{A_{dp}}}&{}{{D_{1p}}}\\ {{\bar{\beta }} {{\tilde{B}}_{1pjk}}}&{}0&{}0\\ {{U_1}{{\tilde{C}}_{pjk}}}&{}{{U_1}{C_{dp}}}&{}{{U_1}{D_{2p}}}\\ {{\bar{\beta }} {U_1}{{\tilde{B}}_{2pjk}}}&{}0&{}0 \end{array}} \right]\).

Next, choose the mode-dependent Lyapunov–Krasovskii function as follows:

$$\begin{aligned} {V_k} = \sum \limits _{l = 1}^2 {{V_{lk}}} \end{aligned}$$
(23)

where \({V_{1k}} = x_k^{\mathrm{{T}}}{P_{{\partial _k}}}{x_k}\), \({V_{2k}} = \sum \limits _{b = - {d_2} + 1}^{ - {d_1} + 1} {\sum \limits _{a = k - 1 + b}^{k - 1} {x_a^{\mathrm{{T}}}} } Q{x_a}\). We introduce \({\xi _{1k}} = {\left[ {\begin{array}{*{20}{c}} {x_k^{\mathrm{{T}}}}&{x_{k - d_k}^{\mathrm{{T}}}} \end{array}} \right] ^{\mathrm{{T}}}}\) and \({\xi _k} = {\left[ {\begin{array}{*{20}{c}} {\xi _{1k}^{\mathrm{{T}}}}&{w_k^{\mathrm{{T}}}} \end{array}} \right] ^{\mathrm{{T}}}}\). Denoting \(\nabla {V_k}\) as the forward difference of \({V_k}\), we have

$$\begin{aligned} \mathrm{E}\{ \nabla {V_k}\} = \mathrm{E}\{ \nabla {V_{1k}}\} + \mathrm{E}\{ \nabla {V_{2k}}\} \end{aligned}$$
(24)
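As an aside, the Lyapunov–Krasovskii function (23) can be evaluated directly in code; a minimal sketch, assuming the state history (including the initial segment) and the mode sequence are stored in indexable containers (all identifiers are illustrative):

```python
def lk_functional(x_hist, mode_hist, k, P, Q, d1, d2):
    """V_k of (23): V_1k = x_k^T P_{partial_k} x_k plus the double-sum term V_2k.
    x_hist must be indexable for a = k - d2, ..., k (including the initial segment)."""
    V1 = x_hist[k] @ P[mode_hist[k]] @ x_hist[k]
    V2 = sum(x_hist[a] @ Q @ x_hist[a]
             for b in range(-d2 + 1, -d1 + 2)      # b = -d2+1, ..., -d1+1
             for a in range(k - 1 + b, k))         # a = k-1+b, ..., k-1
    return V1 + V2
```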

Then, we compute \(\mathrm{E}\{ \nabla {V_{1k}}\}\) and \(\mathrm{E}\{ \nabla {V_{2k}}\}\) as follows:

$$\begin{aligned} \begin{array}{l} \mathrm{E}\{ \nabla {V_{1k}}\} = \mathrm{E}\left\{ {{V_{1(k + 1)}} - {V_{1k}}|{x_k},{\partial _k} = p} \right\} \\ \qquad = \mathrm{E}\{ x_{k + 1}^{\mathrm{{T}}}{P_q}{x_{k + 1}} - x_k^{\mathrm{{T}}}{P_p}{x_k}\} \\ \qquad = \mathrm{E}\{ \sum \limits _{j = 1}^s {\sum \limits _{q = 1}^s {{\theta _{pj}}{\pi _{pq}}x_{k + 1}^{\mathrm{{T}}}{P_q}{x_{k + 1}} - x_k^{\mathrm{{T}}}{P_p}{x_k}} } \} \\ \qquad = \mathrm{E}\{ \sum \limits _{j = 1}^s {{\theta _{pj}}{\xi _k^{\mathrm{{T}}}}\left[ {\begin{array}{*{20}{c}} {\phi _{pjk}^{\mathrm{{T}}}}\\ {D_{1p}^{\mathrm{{T}}}} \end{array}} \right] {{\bar{P}}_p}\left[ {\begin{array}{*{20}{c}} {{\phi _{pjk}}}&{{D_{1p}}} \end{array}} \right] {\xi _k} - x_k^{\mathrm{{T}}}{P_p}{x_k}} \} \end{array} \end{aligned}$$
(25)
$$\begin{aligned} \begin{array}{l} \mathrm{E}\{ \nabla {V_{2k}}\} = \mathrm{E}\{ {V_{2(k + 1)}} - {V_{2k}}\} \\ \qquad = \mathrm{E}\{ \sum \limits _{b = - {d_2} + 1}^{ - {d_1} + 1} {\sum \limits _{a = k + b}^k {x_a^{\mathrm{{T}}}} } Q{x_a} - \sum \limits _{b = - {d_2} + 1}^{ - {d_1} + 1} {\sum \limits _{a = k - 1 + b}^{k - 1} {x_a^{\mathrm{{T}}}} } Q{x_a}\} \\ \qquad = \mathrm{E}\{ \sum \limits _{b = - {d_2} + 1}^{ - {d_1} + 1} {\{ x_k^{\mathrm{{T}}}Q{x_k} - x_{k - 1 + b}^{\mathrm{{T}}}Q{x_{k - 1 + b}}\} } \} \\ \le \mathrm{E}\{ x_k^{\mathrm{{T}}}dQ{x_k} - x_{k - d_{k}}^{\mathrm{{T}}}Q{x_{k - d_{k}}}\} \\ \qquad = \mathrm{E}\{ \xi _{1k}^{\mathrm{{T}}}{\Gamma ^1}{\xi _{1k}}\} \end{array} \end{aligned}$$
(26)

where “\(\le\)” is obtained from (27) and (28).

$$\begin{aligned}&\sum \limits _{b = - {d_2} + 1}^{ - {d_1} + 1} {x_k^{\mathrm{{T}}}Q{x_k}} = x_k^{\mathrm{{T}}}dQ{x_k} \end{aligned}$$
(27)
$$\begin{aligned}&\sum \limits _{b = - {d_2} + 1}^{ - {d_1} + 1} {x_{k - 1 + b}^{\mathrm{{T}}}Q{x_{k - 1 + b}}} = \sum \limits _{b = k - {d_2}}^{k - {d_1}} {x_b^{\mathrm{{T}}}Q{x_b}} \ge x_{k - d_{k}}^{\mathrm{{T}}}Q{x_{k - d_{k}}} \end{aligned}$$
(28)

Noting that \({w_k} \equiv 0\) in the definition of stochastic stability, and combining (24), (25) and (26), we can get

$$\begin{aligned} \begin{array}{l} \mathrm{E}\{ \nabla {V_k}\} = \mathrm{E}\{ \nabla {V_{1k}}\} + \mathrm{E}\{ \nabla {V_{2k}}\} \\ \le \mathrm{E}\{ \sum \limits _{j = 1}^s {{\theta _{pj}}\xi _{1k}^{\mathrm{{T}}}{\Omega _{1pjk}}{\xi _{1k}} - x_k^{\mathrm{{T}}}{P_p}{x_k}} \} \\ < \mathrm{E}\{ x_k^{\mathrm{{T}}}(\sum \limits _{j = 1}^s {{\theta _{pj}}{R_{pj}} - {P_p})} {x_k}\} \\ \le \varpi \mathrm{E}\{ x_k^{\mathrm{{T}}}{x_k}\} \end{array} \end{aligned}$$
(29)

where “<” is obtained from (22), and \(\varpi = {\lambda _{\max }}(\sum \limits _{j = 1}^s {{\theta _{pj}}{R_{pj}} - {P_p}} )\).

Accordingly,

$$\begin{aligned} \mathrm{E}\{ \sum \limits _0^\infty {\nabla {V_k}} \} = \mathrm{E}\{ {V_\infty } - {V_0}\} \le \varpi \mathrm{E}\{ \sum \limits _0^\infty {x_k^{\mathrm{{T}}}{x_k}} \} \end{aligned}$$
(30)

From (16) we know that \(\varpi < 0\); hence, noting that \({V_\infty } \ge 0\), (30) yields \(\mathrm{E}\{ \sum \nolimits _0^\infty {x_k^{\mathrm{{T}}}{x_k}} \} \le - {\varpi ^{ - 1}}{V_0}\), and therefore

$$\begin{aligned} \mathrm{E}\left\{ \sum \limits _0^\infty {x_k^{\mathrm{{T}}}{x_k}} \right\} < \infty \end{aligned}$$
(31)

which conforms to Definition 2.1; namely, the stochastic stability of the system (11) is proved.

Next, we will show that the system (11) is strictly \((\mu ,\vartheta ,\upsilon ) - \gamma -\)dissipative. Define the performance index as

$$\begin{aligned} \begin{array}{l} J = \sum \limits _{k = 0}^\infty {\mathrm{E}\{ w_k^{\mathrm{{T}}}(\gamma I - \upsilon ){w_k} - y_k^{\mathrm{{T}}}\mu {y_k} - 2y_k^{\mathrm{{T}}}\vartheta {w_k}\} } \\ \le \sum \limits _{k = 0}^\infty {\mathrm{E}\{ } w_k^{\mathrm{{T}}}(\gamma I - \upsilon ){w_k} - y_k^{\mathrm{{T}}}\mu {y_k} - 2y_k^{\mathrm{{T}}}\vartheta {w_k} + \nabla {V_k}\} \end{array} \end{aligned}$$
(32)

Then, \(\mathrm{E}\{ \nabla {V_{1k}}\}\) and \(\mathrm{E}\{ \nabla {V_{2k}}\}\) are calculated, respectively, as

$$\begin{aligned} \mathrm{E}\left\{ {\nabla {V_{1k}}} \right\}= & {} \mathrm{E}\left\{ {\sum \limits _{j = 1}^s {{\theta _{pj}}x_{k + 1}^{\mathrm{{T}}}{{\bar{P}}_p}{x_{k + 1}} - x_k^{\mathrm{{T}}}{P_p}{x_k}} } \right\} \end{aligned}$$
(33)
$$\begin{aligned} \mathrm{E}\left\{ {\nabla {V_{2k}}} \right\}\le & {} \mathrm{E}\left\{ {\sum \limits _{j = 1}^s {{\theta _{pj}}{\xi _{k} ^{\mathrm{{T}}}}diag\{ {\Gamma ^1},0\} \xi _{k} } } \right\} \end{aligned}$$
(34)

Combining (32), (33) and (34), we can obtain

$$\begin{aligned} \begin{array}{l} J \le \sum \limits _{k = 0}^\infty {\mathrm{E}\{ \sum \limits _{j = 1}^s {{\theta _{pj}}{\xi _k ^{\mathrm{{T}}}}{\Omega _{2pjk}}\xi _k - x_k^{\mathrm{{T}}}{P_p}{x_k}\} } } \\< \sum \limits _{k = 0}^\infty {\mathrm{E}\{ x_k^{\mathrm{{T}}}(\sum \limits _{j = 1}^s {{\theta _{pj}}{R_{pj}} - {P_p}} )} {x_k}\} \\ < 0 \end{array} \end{aligned}$$
(35)

where the two "<" are obtained from (22) and (16), respectively. By Definition 2.2, we know that the system (11) is strictly \((\mu ,\vartheta ,\upsilon ) - \gamma -\)dissipative. Thus, the proof is completed. \(\square\)

Remark 3

A sufficient condition for stochastic stability and strict dissipativity of the system (11) is derived in Theorem 2.1. However, owing to the nonlinear terms in conditions (16) and (17), the controller gain cannot be parameterized directly; therefore, further linearization is required.

2.3 Design of the event-triggered asynchronous controller

This section will provide a design method for an event-triggered asynchronous controller and further determine the controller gains.

Theorem 2.2

The system (11) is stochastically stable and strictly \((\mu ,\vartheta ,\upsilon ) - \gamma -\)dissipative if there exist matrices \({\bar{K}_j} \in {{\mathbb {R}}^{{n_u} \times {n_x}}}\), \(V \in {{\mathbb {R}}^{{n_x} \times {n_x}}}\), positive definite matrices \({\bar{P}_p} \in {{\mathbb {R}}^{{n_x} \times {n_x}}}\), \({\bar{R}_{pj}} \in {{\mathbb {R}}^{{n_x} \times {n_x}}}\), \(\bar{Q} \in {{\mathbb {R}}^{{n_x} \times {n_x}}}\) and a positive definite diagonal matrix \({\bar{G}_{pj}} \in {{\mathbb {R}}^{{n_u} \times {n_u}}}\) such that, for all \(p,j \in \mathrm{{S}}\),

$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} { - {{\bar{P}}_p}}&{}{{J_p}}\\ *&{}{{{\hat{R}}_p}} \end{array}} \right] < 0 \end{aligned}$$
(36)
$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} {{X_{pj}}}&{}{{Y_{pj}}}&{}{{Z_{pj}}}\\ *&{}{ - I}&{}0\\ *&{}*&{}{{{\hat{P}}}} \end{array}} \right] < 0 \end{aligned}$$
(37)

where

\({J_p} = [\sqrt{{\theta _{p1}}} {\bar{P}_p} \cdots \sqrt{{\theta _{pj}}} {\bar{P}_p} \cdots \sqrt{{\theta _{ps}}} {\bar{P}_p}]\), \({\hat{R}_p} = diag\{ - {\bar{R}_{p1}}, \cdots , - {\bar{R}_{pj}}, \cdots , - {\bar{R}_{ps}}\}\),

\({X_{pj}} = \left[ {\begin{array}{*{20}{c}} {d\bar{Q} + {{\bar{R}}_{pj}} - {V^{\mathrm{{T}}}} - V}&{}0&{}{ - \tilde{C}_{pj}^{*\mathrm{{T}}}\vartheta }&{}0&{}{\Lambda {{\bar{G}}_{pj}}}\\ *&{}{ - \bar{Q}}&{}{ - \bar{C}_{dp}^{\mathrm{{T}}}\vartheta }&{}0&{}0\\ *&{}*&{}M_p&{}{\beta {\vartheta ^{\mathrm{{T}}}}{B_{2p}}{{\bar{K}}_j}}&{}0\\ *&{}*&{}*&{}{ - {{\bar{G}}_{pj}}}&{}0\\ *&{}*&{}*&{}*&{}{ - {{\bar{G}}_{pj}}} \end{array}} \right]\),

\({Y_{pj}}={\left[ {\begin{array}{*{20}{c}} {{U_1}({C_p}V + \beta {B_{2p}}{{\bar{K}}_j})}&{}{{U_1}{C_{dp}}V}&{}{{U_1}{D_{2p}}}&{}{\beta {U_1}{B_{2p}}{{\bar{K}}_j}}&{}0\\ {{\bar{\beta }} {U_1}{B_{2p}}{{\bar{K}}_j}}&{}0&{}0&{}{{\bar{\beta }} {U_1}{B_{2p}}{{\bar{K}}_j}}&{}0 \end{array}} \right] ^{\mathrm{{T}}}}\),

\({Z_{pj}} = \left[ {\begin{array}{*{20}{c}} {\sqrt{{\pi _{p1}}} W_{pj}^{\mathrm{{T}}}}&{\sqrt{{\pi _{p2}}} W_{pj}^{\mathrm{{T}}}}&\cdots&{\sqrt{{\pi _{ps}}} W_{pj}^{\mathrm{{T}}}} \end{array}} \right]\),

\({W_{pj}} = {\left[ {\begin{array}{*{20}{c}} {{A_p}V + \beta {B_{1p}}{{\bar{K}}_j}}&{}{{A_{dp}}V}&{}{{D_{1p}}}&{}{\beta {B_{1p}}{{\bar{K}}_j}}&{}0\\ {{\bar{\beta }} {B_{1p}}{{\bar{K}}_j}}&{}0&{}0&{}{{\bar{\beta }} {B_{1p}}{{\bar{K}}_j}}&{}0 \end{array}} \right] }\),

\(\tilde{C}_{pj}^* = {C_p}V + \beta {B_{2p}}{\bar{K}_j}\), \({\bar{C}_{dp}} = {C_{dp}}V\), \({\hat{P}} = diag\{ - {\bar{P}_1}, - {\bar{P}_2}, \cdots , - {\bar{P}_s}\}\).

Moreover, the controller gain \({K_j}\) can be determined by

$$\begin{aligned} {K_j} = {\bar{K} _j}{V^{ - 1}} \end{aligned}$$
(38)

Proof

First, we define

$$\begin{aligned} \begin{array}{c} {{\bar{P}}_p} = P_p^{ - 1},{{\bar{R}}_{pj}} = R_{pj}^{ - 1},{{\bar{K}}_j} = {K_j}V, \bar{Q} = {V^{\mathrm{{T}}}}QV,{{\bar{G}}_{pj}} = {V^{\mathrm{{T}}}}{{ G}_{pj}}V \end{array} \end{aligned}$$
(39)

where V is an invertible slack matrix. Applying a congruence transformation to (36) with \(diag\{ {P_p},I, \cdots ,I\}\), one has

$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} { - {P_p}}&{}{{{\bar{J}}_p}}\\ *&{}{{{\hat{R}}_p}} \end{array}} \right] < 0 \end{aligned}$$
(40)

where \({\bar{J}_p} = [\sqrt{{\theta _{p1}}} I \cdots \sqrt{{\theta _{pj}}} I \cdots \sqrt{{\theta _{ps}}} I]\). By the Schur complement, (40) is equivalent to (16).

Moreover, due to the fact that

$$\begin{aligned} {({\bar{R}_{pj}} - V)^{\mathrm{{T}}}}\bar{R}_{pj}^{ - 1}({\bar{R}_{pj}} - V) \ge 0 \end{aligned}$$
(41)

namely

$$\begin{aligned} - {V^{\mathrm{{T}}}}\bar{R}_{pj}^{ - 1}V \le {\bar{R}_{pj}} - {V^{\mathrm{{T}}}} - V \end{aligned}$$
(42)

Then, (37) implies

$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} {{{\bar{X}}_{pj}}}&{}{{Y_{pj}}}&{}{{Z_{pj}}}\\ *&{}{ - I}&{}0\\ *&{}*&{}{{{\hat{P}}}} \end{array}} \right] < 0 \end{aligned}$$
(43)

where

\({\bar{X}_{pj}} = \left[ {\begin{array}{*{20}{c}} {d\bar{Q} - {V^{\mathrm{{T}}}}\bar{R}_{pj}^{ - 1}V}&{}0&{}{ - \tilde{C}_{pj}^{*\mathrm{{T}}}\vartheta }&{}0&{}{\Lambda {{\bar{G}}_{pj}}}\\ *&{}{ - \bar{Q}}&{}{ - \bar{C}_{dp}^{\mathrm{{T}}}\vartheta }&{}0&{}0\\ *&{}*&{}M_p&{}{\beta {\vartheta ^{\mathrm{{T}}}}{B_{2p}}{{\bar{K}}_j}}&{}0\\ *&{}*&{}*&{}{ - {{\bar{G}}_{pj}}}&{}0\\ *&{}*&{}*&{}*&{}{ - {{\bar{G}}_{pj}}} \end{array}} \right]\).

Denoting \(\Theta = diag\{ {({V^{\mathrm{{T}}}})^{ - 1}},{({V^{\mathrm{{T}}}})^{ - 1}},I,{({V^{\mathrm{{T}}}})^{ - 1}},{({V^{\mathrm{{T}}}})^{ - 1}},I, \cdots ,I\}\) and applying a congruence transformation to (43) with \(\Theta\), we can get

$$\begin{aligned} \left[ {\begin{array}{*{20}{c}} {{{\tilde{X}}_{pj}}}&{}{{{\tilde{Y}}_{pj}}}&{}{{{\tilde{Z}}_{pj}}}\\ *&{}{ - I}&{}0\\ *&{}*&{}{{{\hat{P}}}} \end{array}} \right] < 0 \end{aligned}$$
(44)

where

\({\tilde{X}_{pj}} = \left[ {\begin{array}{*{20}{c}} {dQ - {R_{pj}}}&{}0&{}{ - \bar{C}_{pj}^{*\mathrm{{T}}}\vartheta }&{}0&{}{\Lambda {G_{pj}}}\\ *&{}{ - Q}&{}{ - C_{dp}^{\mathrm{{T}}}\vartheta }&{}0&{}0\\ *&{}*&{}M_p&{}{\beta {\vartheta ^{\mathrm{{T}}}}{B_{2p}}{{ K}_j}}&{}0\\ *&{}*&{}*&{}{ - {G_{pj}}}&{}0\\ *&{}*&{}*&{}*&{}{ - {G_{pj}}} \end{array}} \right]\),

\({\tilde{Y}_{pj}} = {\left[ {\begin{array}{*{20}{c}} {{U_1}({C_p} + \beta {B_{2p}}{K_j})}&{}{{U_1}{C_{dp}}}&{}{{U_1}{D_{2p}}}&{}{\beta {U_1}{B_{2p}}{K_j}}&{}0\\ {{\bar{\beta }} {U_1}{B_{2p}}{K_j}}&{}0&{}0&{}{{\bar{\beta }} {U_1}{B_{2p}}{K_j}}&{}0 \end{array}} \right] ^{\mathrm{{T}}}}\),

\({\tilde{Z}_{pj}} = \left[ {\begin{array}{*{20}{c}} {\sqrt{{\pi _{p1}}} \tilde{W}_{pj}^{\mathrm{{T}}}}&{\sqrt{{\pi _{p2}}} \tilde{W}_{pj}^{\mathrm{{T}}}}&\cdots&{\sqrt{{\pi _{ps}}} \tilde{W}_{pj}^{\mathrm{{T}}}} \end{array}} \right]\),

\({\tilde{W}_{pj}} = {\left[ {\begin{array}{*{20}{c}} {{A_p} + \beta {B_{1p}}{K_j}}&{}{{A_{dp}}}&{}{{D_{1p}}}&{}{\beta {B_{1p}}{K_j}}&{}0\\ {{\bar{\beta }} {B_{1p}}{K_j}}&{}0&{}0&{}{{\bar{\beta }} {B_{1p}}{K_j}}&{}0 \end{array}} \right] }\).

By employing the Schur complement to (44), we obtain (17), and the proof is completed. \(\square\)

Remark 4

In Theorem 2.1, it is difficult to compute the controller gain directly because of the nonlinear terms. Thus, we introduce the slack matrix V in Theorem 2.2 and transform the nonlinear conditions into LMIs by using matrix scaling and slack matrix techniques.

Remark 5

According to dissipative theory, the larger \(\gamma\) is, the better the dissipative performance. We can obtain the optimal performance \({\gamma ^*}\) by solving the following convex optimization problem:

$$\begin{aligned} \left\{ {\begin{array}{*{20}{c}} {\begin{array}{*{20}{c}} {\min }\\ {s.t.} \end{array}}&{}{\begin{array}{*{20}{c}} { - \gamma }\\ {(36),(37)} \end{array}} \end{array}} \right. \end{aligned}$$
(45)
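As an illustration of how (45) can be set up numerically, the following schematic uses the cvxpy modeling package; only the small LMI (36) is written out, the large block LMI (37) would be assembled with cp.bmat from the system data in exactly the same way, and the dimensions, variable names and placeholder CPM are assumptions made purely for this sketch (it is not the code used to generate the results in Sect. 3).

```python
import cvxpy as cp
import numpy as np

# Schematic setup of the convex problem (45); the LMI (37) is left as a comment.
s, n_x, n_u = 4, 2, 1
theta = np.full((s, s), 1.0 / s)                    # placeholder CPM Psi

gamma = cp.Variable()
P_bar = [cp.Variable((n_x, n_x), PSD=True) for _ in range(s)]
R_bar = [[cp.Variable((n_x, n_x), PSD=True) for _ in range(s)] for _ in range(s)]
V     = cp.Variable((n_x, n_x))                     # slack matrix of Theorem 2.2
K_bar = [cp.Variable((n_u, n_x)) for _ in range(s)] # \bar{K}_j

cons = []
for p in range(s):
    # LMI (36): [[-P_bar_p, J_p], [*, R_hat_p]] < 0
    J_p   = cp.hstack([np.sqrt(theta[p, j]) * P_bar[p] for j in range(s)])
    R_hat = cp.bmat([[-R_bar[p][i] if i == j else np.zeros((n_x, n_x))
                      for j in range(s)] for i in range(s)])
    cons.append(cp.bmat([[-P_bar[p], J_p], [J_p.T, R_hat]]) << 0)
    # for j in range(s): cons.append(lmi_37(p, j) << 0)   # built analogously (omitted)

# Once (37) is added, gamma* and the gains (38) follow from:
# prob = cp.Problem(cp.Minimize(-gamma), cons); prob.solve()
# K = [K_bar[j].value @ np.linalg.inv(V.value) for j in range(s)]
```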

Moreover, it is noted that the dissipative performance also includes two special performances:

(1) \({H_\infty }\): let \(\mu = - I,\vartheta = 0,\upsilon = ({\gamma ^2} + \gamma )I\) in (37).

(2) Passivity: when \({{\mathbb {R}}^{{n_y}}} = {{\mathbb {R}}^{{n_w}}}\), let \(\mu = 0,\vartheta = I,\upsilon = 2\gamma I\) in (37).

Remark 6

Although the same asynchronization method as in [26] is adopted in this work, there are great differences in the issues of interest, the data transmission mechanism and the system performance. The work [26] mainly considered the \({H_\infty }\) control of MJSs with time delay and quantization, whereas we consider not only the time delay but also the packet dropouts between the controller and the actuator. Considering the limited communication resources, we also introduce an event-triggered mechanism to reduce the communication consumption. In addition, the dissipative control considered here is more general, covering both \({H_\infty }\) control and passive control. In particular, due to the introduction of the event-triggered mechanism and packet dropouts, the technical derivation of the design method in this work is more involved.

3 Experimental results and discussion

In this section, a 4-mode robotic arm system [45] is used to verify the validity of the proposed design method, and the corresponding parameters are as follows:

\({A_1} = \left[ {\begin{array}{*{20}{c}} 1&{}{0.1}\\ { - 0.4905}&{}{0.8} \end{array}} \right]\), \({A_2} = \left[ {\begin{array}{*{20}{c}} 1&{}{0.1}\\ { - 0.4905}&{}{0.96} \end{array}} \right]\), \({A_3} = \left[ {\begin{array}{*{20}{c}} 1&{}{0.1}\\ { - 0.4905}&{}{0.98} \end{array}} \right]\),

\({A_4} = \left[ {\begin{array}{*{20}{c}} 1&{}{0.1}\\ { - 0.4905}&{}{0.9867} \end{array}} \right]\), \({A_{d1}} = \left[ {\begin{array}{*{20}{c}} {0.01}&{}{ - 0.02}\\ { - 0.01}&{}{0.02} \end{array}} \right]\), \({A_{d2}} = \left[ {\begin{array}{*{20}{c}} {0.01}&{}{ - 0.02}\\ {0.01}&{}{0.04} \end{array}} \right]\),

\({A_{d3}} = \left[ {\begin{array}{*{20}{c}} {0.01}&{}{ - 0.02}\\ {0.01}&{}{0.05} \end{array}} \right]\), \({A_{d4}} = \left[ {\begin{array}{*{20}{c}} {0.01}&{}{ - 0.02}\\ {0.01}&{}{0.08} \end{array}} \right]\), \({B_{11}} = \left[ {\begin{array}{*{20}{c}} 0\\ {0.1} \end{array}} \right]\), \({B_{12}} = \left[ {\begin{array}{*{20}{c}} 0\\ {0.02} \end{array}} \right]\),

\({B_{13}} = \left[ {\begin{array}{*{20}{c}} 0\\ {0.01} \end{array}} \right]\), \({B_{14}} = \left[ {\begin{array}{*{20}{c}} 0\\ {0.007} \end{array}} \right]\), \({B_{21}} = {B_{22}} = {B_{23}} = {B_{24}} = 0\),

\({C_1} = {C_2} = {C_3} = {C_4} = \left[ {\begin{array}{*{20}{c}} 1&0 \end{array}} \right]\), \({C_{d1}} = \left[ {\begin{array}{*{20}{c}} {0.1}&{0.01} \end{array}} \right]\), \({C_{d2}} = \left[ {\begin{array}{*{20}{c}} {0.1}&{0.02} \end{array}} \right]\),

\({C_{d3}} = \left[ {\begin{array}{*{20}{c}} {0.1}&{0.05} \end{array}} \right]\), \({C_{d4}} = \left[ {\begin{array}{*{20}{c}} {0.1}&{0.08} \end{array}} \right]\), \({D_{11}} = {D_{12}} = {D_{13}} = {D_{14}} = \left[ {\begin{array}{*{20}{c}} 0\\ {0.1} \end{array}} \right]\),

\({D_{21}} = {D_{22}} = {D_{23}} = {D_{24}} = 0.1\).

Let the TPM \(\Upsilon\) and the CPM \(\Psi\) be as follows:

\(\Upsilon = \left[ {\begin{array}{*{20}{c}} {0.3}&{}{0.2}&{}{0.4}&{}{0.1}\\ {0.4}&{}{0.2}&{}{0.2}&{}{0.2}\\ {0.55}&{}{0.15}&{}{0.3}&{}0\\ {0.1}&{}{0.2}&{}{0.3}&{}{0.4} \end{array}} \right]\), \(\Psi = \left[ {\begin{array}{*{20}{c}} {0.2}&{}{0.25}&{}{0.4}&{}{0.15}\\ {0.1}&{}{0.2}&{}{0.3}&{}{0.4}\\ {0.3}&{}{0.2}&{}{0.4}&{}{0.1}\\ {0.4}&{}{0.2}&{}{0.2}&{}{0.2} \end{array}} \right]\),

respectively. Choose the dissipative parameters \(\mu = - 0.36,\vartheta = - 2,\upsilon = 2\), the successful transmission probability \(\beta = 0.9\) and the error threshold \(\delta = 0.15\).

By solving the LMIs in Theorem 2.2, we obtain the optimal performance \({\gamma ^*} = 1.4140\) and the following controller gains:

\({K_1} = \left[ {\begin{array}{*{20}{c}} { - 24.4130}&{ - 4.4994} \end{array}} \right]\), \({K_2} = \left[ {\begin{array}{*{20}{c}} { - 24.5972}&{ - 4.5774} \end{array}} \right]\),

\({K_3} = \left[ {\begin{array}{*{20}{c}} { - 24.5182}&{ - 4.5793} \end{array}} \right]\), \({K_4} = \left[ {\begin{array}{*{20}{c}} { - 28.4887}&{ - 5.7050} \end{array}} \right]\).

Based on the feasible solution obtained above, we assume the initial state \({x_0} = {\left[ {\begin{array}{*{20}{c}} {0.3}&{0.2} \end{array}} \right] ^{\mathrm{{T}}}}\) and the disturbance input \({w_k} = {0.9^k}\cos (k)\). Then, a simulation is carried out with the proposed event-triggered asynchronous controller. The simulation results are shown in Figs. 2, 3 and 4. From Fig. 2, it can be observed that the system state, output and control input gradually converge; namely, the system is stochastically stable. Furthermore, we can see from Fig. 3 that the amount of data transmission is reduced significantly. Figure 4 shows the modes of the controller and the plant, which are asynchronous.
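For reference, a minimal Python sketch of the simulation loop described above is given below; since the delay bounds, the initial plant mode and the initial segment \({\chi _{{k_0}}}\) are not specified in the example, a constant delay \(d_k = 2\), initial mode 1 and \({\chi _{{k_0}}} = {x_0}\) are assumed purely for illustration, and only the state equation of (11) is simulated (this is not the code used to produce Figs. 2, 3 and 4):

```python
import numpy as np

rng = np.random.default_rng(2)
A  = [np.array([[1, .1], [-.4905, a]]) for a in (.8, .96, .98, .9867)]
Ad = [np.array([[.01, -.02], [r1, r2]]) for r1, r2 in
      ((-.01, .02), (.01, .04), (.01, .05), (.01, .08))]
B1 = [np.array([0, b]) for b in (.1, .02, .01, .007)]
D1 = np.array([0, .1])
K  = [np.array([-24.4130, -4.4994]), np.array([-24.5972, -4.5774]),
      np.array([-24.5182, -4.5793]), np.array([-28.4887, -5.7050])]
Ups = np.array([[.3, .2, .4, .1], [.4, .2, .2, .2],
                [.55, .15, .3, 0], [.1, .2, .3, .4]])    # TPM
Psi = np.array([[.2, .25, .4, .15], [.1, .2, .3, .4],
                [.3, .2, .4, .1], [.4, .2, .2, .2]])     # CPM
beta, delta, d_k, T = 0.9, 0.15, 2, 100                  # d_k = 2 is an assumption

x = np.zeros((T + 1, 2)); x[0] = [0.3, 0.2]              # past states taken as x_0
x_hat = x[0].copy()                                      # last transmitted state
p = 0                                                    # initial plant mode (mode 1, assumed)
for k in range(T):
    fire = np.abs(x_hat - x[k]) > delta * np.abs(x[k])   # event trigger (3)
    x_hat = np.where(fire, x[k], x_hat)                  # transmitted state (4)
    j = rng.choice(4, p=Psi[p])                          # controller mode via CPM (6)
    u = K[j] @ x_hat                                     # asynchronous controller (5)
    beta_k = rng.random() < beta                         # Bernoulli dropout (7)
    w = 0.9 ** k * np.cos(k)                             # disturbance input
    x[k + 1] = (A[p] @ x[k] + Ad[p] @ x[max(k - d_k, 0)]
                + beta_k * B1[p] * u + D1 * w)           # state equation of (11)
    p = rng.choice(4, p=Ups[p])                          # next plant mode via TPM (2)
```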

Fig. 2

System state, output and control input

Fig. 3

The event-triggered interval

Fig. 4

The modes of the controller and the plant

Next, we examine the event-triggered performance by changing the threshold \(\delta\). Based on Theorem 2.2, the simulation results are listed in Table 1. It is easy to observe that as \(\delta\) increases, the optimal control performance degrades slightly, while the DTP is greatly improved. Therefore, we can choose an appropriate \(\delta\) to effectively reduce the data transmission rate on the premise that the system retains a satisfactory dissipative performance. We then discuss the impact of different packet dropout rates on the system performance. As shown in Table 2, when \(\beta = 1\), which stands for no packet dropout, the control performance of the system is the best. As the packet dropout rate increases, the control performance becomes worse.

Table 1 Dissipative performance \({\gamma ^*}\) and data transmission performance
Table 2 Dissipative performance \({\gamma ^*}\) under different packet dropout rates

Next, we investigate three control performances (i.e., dissipative performance, \({H_\infty }\) performance and passive performance) under different CPMs, corresponding to the synchronous, weakly asynchronous, strongly asynchronous and completely asynchronous cases. The corresponding CPMs are as follows:

Case 1: \(\Psi = \left[ {\begin{array}{*{20}{c}} 1&{}0&{}0&{}0\\ 0&{}1&{}0&{}0\\ 0&{}0&{}1&{}0\\ 0&{}0&{}0&{}1 \end{array}} \right]\), Case 2: \(\Psi = \left[ {\begin{array}{*{20}{c}} 1&{}0&{}0&{}0\\ 0&{}1&{}0&{}0\\ {0.3}&{}{0.2}&{}{0.4}&{}{0.1}\\ {0.4}&{}{0.2}&{}{0.2}&{}{0.2} \end{array}} \right]\),

Case 3: \(\Psi = \left[ {\begin{array}{*{20}{c}} 1&{}0&{}0&{}0\\ {0.1}&{}{0.2}&{}{0.3}&{}{0.4}\\ {0.3}&{}{0.2}&{}{0.4}&{}{0.1}\\ {0.4}&{}{0.2}&{}{0.2}&{}{0.2} \end{array}} \right]\), Case 4: \(\Psi = \left[ {\begin{array}{*{20}{c}} {0.2}&{}{0.25}&{}{0.4}&{}{0.15}\\ {0.1}&{}{0.2}&{}{0.3}&{}{0.4}\\ {0.3}&{}{0.2}&{}{0.4}&{}{0.1}\\ {0.4}&{}{0.2}&{}{0.2}&{}{0.2} \end{array}} \right]\).

Furthermore, the parameters \((\mu , \vartheta , \upsilon )\) for the three different performances are presented in Table 3. The optimal performance \({\gamma ^*}\) obtained by solving the LMIs in Theorem 2.2 is shown in Table 4. Note that a smaller \({\gamma ^*}\) implies a worse dissipative performance, but a better \({H_\infty }\) performance and passive performance [46]. Clearly, the higher the asynchronous level between the physical plant and the controller, the worse the control performance obtained.

Table 3 The values of \(\mu\), \(\vartheta\) and \(\upsilon\) for different performances
Table 4 The values of optimal performance \({\gamma ^*}\) under different performances

4 Conclusions

In this paper, the dissipative asynchronous control issue has been investigated for networked time-delay MJSs with an event-triggered scheme and packet dropouts. An event-triggered strategy has been introduced to reduce the communication pressure, and an HMM has been used to describe the asynchronization between the controller and the physical plant. By using a Lyapunov–Krasovskii function and dissipative theory, and combining slack matrix and matrix scaling techniques, a controller design method has been obtained. Finally, an example of a robotic arm system has been taken to illustrate the effectiveness of the obtained approach. The relationship among the dissipative performance, the data transmission performance and the event-triggered threshold has also been discussed. In the future, the asynchronous control of networked MJSs with partially unknown TPs and CPs will be worthy of further investigation. Moreover, how to design more effective event-triggered mechanisms and packet dropout strategies to save communication resources and further improve the control performance is also our future goal.