1 Introduction

Iterative learning control (ILC) is an effective technique for systems that repetitively perform the same task over a finite time interval. It updates the control input using only the input/output data of previous iterations, so that the tracking performance improves from iteration to iteration. Due to its simplicity and effectiveness, ILC has been widely applied in practical systems such as robotics, chemical batch processes, hard disk drives and urban traffic systems [1–8].

Time delay, which is a source of instability and poor performance, often appears in practical systems due to the finite speed of signal transmission and information processing. As a result, the control of time-delay systems has attracted considerable attention [9–13]. Recently, ILC has been introduced to systems with time delays to improve the tracking performance. In [14], a new ILC method is proposed for a class of linear systems with time delay using a holding mechanism. In [15], an ILC algorithm integrated with a Smith predictor for batch processes with fixed time delay is proposed and analyzed in the frequency domain; it can achieve perfect tracking performance under certain conditions. In [16], an ILC scheme is proposed for systems with time delay and model uncertainties based on the internal model control principle. In [17], a two-dimensional (2D) model-based ILC method for systems with both state delays and input delays is presented; necessary and sufficient conditions for the stability of ILC are also provided. In [18], the problem of ILC design for time-delay systems in the presence of initial shifts is considered, and 2D system theory is employed to develop convergence conditions for both asymptotic stability and monotonic convergence of ILC.

In these existing studies [14–18], the time delays considered are known, fixed constants. However, many practical systems suffer from time-varying delays, for which the constant-delay assumption is too restrictive, and designing ILC for systems with time-varying delays is challenging. There have already been a few results on this issue. In [19], a robust state feedback scheme integrated with ILC is proposed for batch processes with interval time-varying delay; the design is based on a 2D Roesser model and general 2D system theory [20]. In [21], a robust closed-loop ILC scheme is proposed for batch processes with state delay and time-varying uncertainties, where the batch processes are described by a 2D Fornasini-Marchesini (FM) model. In [22], a robust output feedback scheme incorporating ILC is proposed for a class of batch processes with uncertainties and interval time-varying delay; the batch process is transformed into a 2D FM model with a delay varying in a range, and the design is cast as a robust H∞ control problem for uncertain 2D systems. Notably, almost all available results on ILC systems with time delays rest on the implicit assumption that the sensor output measurement is perfect. In practice, however, this assumption often fails, mainly because sensors may suffer from probabilistic signal missing, especially in a networked environment [23–25].

Actually, the problem of ILC for networked control systems (NCSs) with packet dropouts has received some attention. The stability of ILC for linear and nonlinear systems with intermittent measurements is investigated in [26, 27], and robust ILC designs that suppress the effect of data dropouts in NCSs are proposed in [28–33]. However, to the best of our knowledge, no work has so far considered ILC systems with data dropouts and time delays simultaneously.

This paper proposes a robust ILC design scheme for uncertain linear systems with time-varying delays and random packet dropouts. The considered systems operate in a networked environment, where data packets may be lost during transmission; for convenience, only measurement packet dropouts are taken into account. The packet dropout is modeled by a stochastic sequence satisfying the Bernoulli binary distribution, which renders the ILC system stochastic rather than deterministic. A 2D stochastic Roesser model with a delay varying in a range is then established to describe the entire dynamics. Based on 2D system theory, the ILC law is designed to guarantee mean-square asymptotic stability of the resulting 2D stochastic system. A delay-dependent stability condition is derived in terms of linear matrix inequalities (LMIs), together with formulas for the ILC law design. Finally, an injection molding example is given to demonstrate the effectiveness of the proposed ILC method.

Throughout the present paper, the following notations are used. The superscript ‘T’ denotes matrix transposition, I denotes the identity matrix, and 0 denotes the zero vector or matrix of required dimensions. \(\operatorname{diag} \{ \bullet \}\) denotes the standard (block) diagonal matrix whose off-diagonal elements are zero. In symmetric block matrices, an asterisk ∗ denotes a term induced by symmetry. The notation \(\Vert \bullet \Vert \) refers to the Euclidean vector norm; \(E \{ x \}\) and \(E \{ x|y \}\) denote the expectation of x and the expectation of x conditional on y, respectively. Matrices, if their dimensions are not explicitly stated, are assumed to be compatible for algebraic operations.

2 Problem formulation and 2D system representation

Consider the following linear discrete time system with time-varying delay:

$$ \left \{ \textstyle\begin{array}{@{}l} x(t + 1,k) = ( A + \Delta A )x(t,k) + ( A_{d} + \Delta A_{d} )x(t - d(t),k) + Bu(t,k), \\ y(t,k) = Cx(t,k), \\ x(0,k) = x_{0k}, \end{array}\displaystyle \right . $$
(1)

where k denotes the iteration index and t denotes discrete time. \(x(t,k),u(t,k),y(t,k)\) are the state, input and output variables, and \(A,A_{d},B,C\) are matrices of appropriate dimensions describing the system in state space. \(x_{0k}\) stands for the initial condition of the process in the kth iteration. \(d(t)\) is the time-varying delay satisfying

$$d_{m} \le d(t) \le d_{M}, $$

where \(d_{m},d_{M}\) denote the lower and upper delay bounds. \(\Delta A,\Delta A_{d}\) denote admissible uncertain perturbations of matrices A and \(A_{d}\), which can be represented as

$$\Delta A = E\Sigma F_{1},\qquad \Delta A_{d} = E\Sigma F_{2}, $$

where \(E,F_{1},F_{2}\) are known real constant matrices characterizing the structures of uncertain perturbations, and Σ is an uncertain perturbation of the system that satisfies \(\Sigma^{T}\Sigma \le I\).
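To make the plant model concrete, the following minimal sketch simulates one iteration of system (1) with a randomly varying integer delay and an admissible norm-bounded perturbation. All matrices here are hypothetical illustrative values, not taken from the paper; the scalar `Sigma` trivially satisfies \(\Sigma^{T}\Sigma \le I\).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical small system; these matrices are illustrative only.
A  = np.array([[0.5, 0.1], [0.0, 0.4]])
Ad = np.array([[0.1, 0.0], [0.05, 0.1]])
B  = np.array([[1.0], [0.0]])
E  = np.array([[0.1], [0.1]])
F1 = np.array([[0.2, 0.0]])
F2 = np.array([[0.0, 0.2]])
d_m, d_M, T = 1, 3, 50

def simulate(u, x_hist):
    """Simulate one iteration of system (1): a random integer delay d(t) in
    [d_m, d_M] and a random admissible perturbation Sigma, |Sigma| <= 1."""
    x = list(x_hist)                      # x[-1] is x(0); earlier entries are the delayed history
    for t in range(T):
        Sigma = rng.uniform(-1.0, 1.0)    # scalar uncertainty, so Sigma^T Sigma <= I
        d_t = int(rng.integers(d_m, d_M + 1))
        x_next = ((A + E * Sigma @ F1) @ x[-1]
                  + (Ad + E * Sigma @ F2) @ x[-1 - d_t]
                  + B @ u[t])
        x.append(x_next)
    return np.array(x[d_M:])              # drop the pre-history, keep x(0),...,x(T)

traj = simulate(np.zeros((T, 1)), [np.zeros(2)] * (d_M + 1))
```

With zero input and zero initial history the trajectory stays at the origin, which serves as a quick sanity check of the recursion.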

For system (1), design the following ILC update law:

$$ u(t,k + 1) = u(t,k) + \Delta u(t,k), $$
(2)

where \(\Delta u(t,k)\) is the correction term to be designed.

Denote \(e(t,k) = y_{d}(t) - y(t,k)\), \(\eta (t,k) = x(t - 1,k + 1) - x(t - 1,k)\). From (1) and (2), we can obtain

$$ \begin{bmatrix} \eta (t + 1,k) \\ e(t,k + 1) \end{bmatrix} = \tilde{A} \begin{bmatrix} \eta (t,k) \\ e(t,k) \end{bmatrix} + \tilde{A}_{d} \begin{bmatrix} \eta (t - d(t),k) \\ e(t,k - h(k)) \end{bmatrix} + \bar{B}\Delta u(t - 1,k), $$
(3)

where

$$\begin{aligned}& \tilde{A} = \bar{A} + \Delta \bar{A},\qquad\tilde{A}_{d} = ( \bar{A}_{d} + \Delta \bar{A}_{d} ), \\& \bar{A} = \begin{bmatrix} A & 0 \\ - CA & I \end{bmatrix},\qquad \bar{A}_{d} = \begin{bmatrix} A_{d} & 0 \\ - CA_{d} & 0 \end{bmatrix},\qquad \bar{B} = \begin{bmatrix} B \\ - CB \end{bmatrix}, \\& \Delta \bar{A} = \bar{E}\Sigma \bar{F}_{1},\qquad\Delta \bar{A}_{d} = \bar{E}\Sigma \bar{F}_{2}, \\& \bar{E} = \begin{bmatrix} E \\ - CE \end{bmatrix},\qquad \bar{F}_{1} = [ F_{1} \quad 0 ],\qquad \bar{F}_{2} = [ F_{2} \quad 0 ], \end{aligned}$$

\(h(k)\) is a new delay variable satisfying \(h_{m} \le h(k) \le h_{M}\) with \(h_{m},h_{M}\) denoting the lower and upper delay bounds. The new vector \(e(t,k - h(k))\) is introduced here as is also done in [14] so that system (1) can be modeled as a normal 2D system with interval time delay.
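The block matrices defined after (3) can be assembled mechanically from the plant data. The sketch below does this for hypothetical matrices (`A`, `B`, `C` follow the later example; `Ad`, `F1`, `F2` shapes are assumptions, since a compatible \(A_{d}\) is not given there).

```python
import numpy as np

def lifted_matrices(A, Ad, B, C, E, F1, F2):
    """Build the augmented matrices A_bar, Ad_bar, B_bar, E_bar, F1_bar, F2_bar
    defined after equation (3); n = state dimension, m = output dimension."""
    n, m = A.shape[0], C.shape[0]
    A_bar  = np.block([[A,       np.zeros((n, m))],
                       [-C @ A,  np.eye(m)]])
    Ad_bar = np.block([[Ad,      np.zeros((n, m))],
                       [-C @ Ad, np.zeros((m, m))]])
    B_bar  = np.vstack([B, -C @ B])
    E_bar  = np.vstack([E, -C @ E])
    F1_bar = np.hstack([F1, np.zeros((F1.shape[0], m))])
    F2_bar = np.hstack([F2, np.zeros((F2.shape[0], m))])
    return A_bar, Ad_bar, B_bar, E_bar, F1_bar, F2_bar

# Hypothetical data:
A  = np.array([[1.58, -0.59], [1.0, 0.0]])
Ad = 0.1 * np.eye(2)
B  = np.array([[1.0], [0.0]])
C  = np.array([[1.69, -1.42]])
E  = np.array([[0.1, 0.1], [0.0, 0.0]])
F1 = np.eye(2)
F2 = 0.05 * np.eye(2)
Ab, Adb, Bb, Eb, F1b, F2b = lifted_matrices(A, Ad, B, C, E, F1, F2)
```

The augmented state lives in \(\mathbb{R}^{n+m}\), so with two states and one output each lifted system matrix is 3-by-3.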

Consider the following ILC update law:

$$ \Delta u(t - 1,k) = K\begin{bmatrix} x(t - 1,k + 1) - x(t - 1,k)\\ e(t,k) \end{bmatrix}, $$
(4)

where \(K = [K_{1},K_{2}]\) is a gain matrix to be designed. ILC law (4) consists of two parts: a P-type ILC law and a state feedback control law. This control scheme enjoys the robustness advantages of a feedback loop while gaining the extra performance improvement from ILC.

It is assumed that the ILC law is implemented in a networked control system as shown in Figure 1, where, for convenience, the network exists only on the link from the plant to the controller. The data \(x(t,k + 1),x(t,k),e(t,k)\) are transferred as one packet from the plant to the controller, and it is further supposed that the controller can detect whether the packet has been dropped. If the data packet is lost, then \(\Delta u(t,k) = 0\), that is, \(u(t,k + 1) = u(t,k)\); otherwise \(\Delta u(t,k)\) is calculated as in (4). The ILC update law (4) can thus be described as

$$ \Delta u(t - 1,k) = \alpha (t,k) K\begin{bmatrix} x(t - 1,k + 1) - x(t - 1,k) \\ e(t,k) \end{bmatrix}, $$
(5)

where the stochastic parameter \(\alpha (t,k)\) is a random Bernoulli variable taking the values of 0 and 1 with

$$ \begin{aligned} &\operatorname{Prob} \bigl\{ \alpha (t,k) = 1 \bigr\} = E \bigl\{ \alpha (t,k) \bigr\} = \alpha, \\ &E \bigl\{ \bigl( \alpha (t,k) - \alpha \bigr)^{2} \bigr\} = \alpha (1 - \alpha ), \end{aligned} $$
(6)

in which α satisfying \(0 \le \alpha \le 1\) is a known constant.
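The intermittent update (5) amounts to a Bernoulli gate in front of the correction term. A minimal sketch, with hypothetical dimensions and an assumed gain `K`:

```python
import numpy as np

rng = np.random.default_rng(1)

def ilc_update(u_prev, K, eta, e, p_arrive):
    """One ILC update per (5): the correction K*[eta; e] is applied only when
    the measurement packet arrives (a Bernoulli trial with mean p_arrive);
    otherwise the input is held, u(t,k+1) = u(t,k)."""
    arrived = bool(rng.random() < p_arrive)    # alpha(t,k) in {0, 1}
    z = np.concatenate([eta, e])               # [x(t-1,k+1) - x(t-1,k); e(t,k)]
    du = K @ z if arrived else np.zeros(K.shape[0])
    return u_prev + du, arrived

# Hypothetical dimensions: 2 states, 1 output, 1 input; K is an assumed gain.
K   = np.array([[0.5, -0.2, 0.3]])
eta = np.array([0.1, -0.1])
e   = np.array([0.2])
u0  = np.zeros(1)
u_kept, got = ilc_update(u0, K, eta, e, p_arrive=1.0)       # packet always arrives
u_held, lost_flag = ilc_update(u0, K, eta, e, p_arrive=0.0)  # packet always lost
```

The two extreme arrival probabilities recover the two branches described above: with guaranteed arrival the full correction is applied, and with guaranteed loss the input is simply held.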

Figure 1

Schematic diagram of the ILC systems with data dropouts.

From (3) and (5), it can be derived that

$$ \begin{bmatrix} \eta (t + 1,k) \\ e(t,k + 1) \end{bmatrix} = \bigl( \tilde{A} + \alpha (t,k)\bar{B}K \bigr) \begin{bmatrix} \eta (t,k) \\ e(t,k) \end{bmatrix} + \tilde{A}_{d} \begin{bmatrix} \eta (t - d(t),k) \\ e(t,k - h(k)) \end{bmatrix}. $$
(7)

Denoting \(\eta (t,k) = x^{h}(i,j)\) and \(e(t,k) = x^{v}(i,j)\), system (7) can be rewritten as the following typical 2D Roesser system:

$$ \begin{bmatrix} x^{h}(i + 1,j) \\ x^{v}(i,j + 1) \end{bmatrix} = \bigl( \tilde{A} + \alpha (i,j)\bar{B}K \bigr) \begin{bmatrix} x^{h}(i,j) \\ x^{v}(i,j) \end{bmatrix} + \tilde{A}_{d} \begin{bmatrix} x^{h}(i - d(i),j) \\ x^{v}(i,j-h(j)) \end{bmatrix}, $$
(8)

where the boundary conditions are defined by

$$ \textstyle\begin{array}{@{}l} x^{h}(i,j) = \rho_{ij},\quad \forall 0 \le j < r_{1}, - d_{M} \le i < 0, \\ x^{v}(i,j) = \sigma_{ij},\quad \forall 0 \le i < r_{2}, - h_{M} \le j < 0, \\ \rho_{00} = \sigma_{00}, \end{array} $$
(9)

where \(r_{1} < \infty\) and \(r_{2} < \infty\) are positive integers, \(\rho_{ij}\) and \(\sigma_{ij}\) are given vectors.

Remark 1

It is worth pointing out that 2D system (8) is a stochastic system due to the introduction of the stochastic variable \(\alpha (i,j)\), which distinguishes it from the deterministic 2D or time-delay ILC systems in recent works such as [17–22]. Hence, the ILC design should be discussed within the framework of stochastic stability. To this end, we need the following definition of stochastic stability for 2D systems.

Definition 1

[32]

For all initial boundary conditions in (9), 2D system (8) is said to be mean-square asymptotically stable if the following is satisfied:

$$\lim_{i + j \to \infty} E \bigl\{ \bigl\Vert x(i,j) \bigr\Vert ^{2} \bigr\} = 0. $$

3 Stability analysis and controller design

3.1 Stability analysis

In this section, we focus on the robust stability and robust stabilization of 2D stochastic system (8) using an LMI technique. The following lemma is needed in the proof of our main result.

Lemma 1

[34]

For any vector \(\delta (t) \in \mathbb{R}^{n}\), positive integers \(\kappa_{0} \le \kappa_{1}\), and matrix \(0 < R \in \mathbb{R}^{n \times n}\), the following inequality holds:

$$- (\kappa_{1} - \kappa_{0} + 1)\sum _{t = \kappa_{0}}^{\kappa_{1}} \delta^{T} (t)R\delta (t) \le - \sum_{t = \kappa_{0}}^{\kappa_{1}} \delta^{T} (t)R\sum _{t = \kappa_{0}}^{\kappa_{1}} \delta (t). $$
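Lemma 1 is a discrete Jensen-type inequality, and it is easy to spot-check numerically. A minimal sketch with random data (the dimensions and bounds are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)

# Numerical spot-check of Lemma 1:
# (k1 - k0 + 1) * sum_t d(t)^T R d(t) >= (sum_t d(t))^T R (sum_t d(t)) for R > 0.
n, k0, k1 = 3, 2, 7
M = rng.standard_normal((n, n))
R = M @ M.T + n * np.eye(n)                       # a random positive definite R
deltas = rng.standard_normal((k1 - k0 + 1, n))    # delta(k0), ..., delta(k1)
lhs = (k1 - k0 + 1) * sum(d @ R @ d for d in deltas)
s = deltas.sum(axis=0)
rhs = s @ R @ s
```

Multiplying the inequality by −1 gives the form stated in the lemma; the check confirms `lhs >= rhs` for the sampled data.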

Now, we can give our main result.

Theorem 1

Given positive integers \(d_{m},d_{M},h_{m},h_{M}\), 2D system (8) is mean-square asymptotically stable if there exist positive definite symmetric matrices \(P = \bigl [{\scriptsize\begin{matrix}{}P^{h} & \cr & P^{v} \end{matrix}} \bigr ]\), \(Q = \bigl [{\scriptsize\begin{matrix}{} Q^{h} & \cr & Q^{v} \end{matrix}} \bigr ]\), \(M = \bigl [{\scriptsize\begin{matrix}{} M^{h} & \cr & M^{v} \end{matrix}} \bigr ]\), and \(G = \bigl [{\scriptsize\begin{matrix}{} G^{h} & \cr & G^{v} \end{matrix}} \bigr ]\) such that the following matrix inequality holds:

$$ {{\begin{aligned}[b] \begin{bmatrix} \psi_{1} & 0 & G & ( \tilde{A} + \alpha \bar{B}K )^{T}P & \theta (\bar{B}K)^{T}P & ( ( \tilde{A} + \alpha \bar{B}K )^{T} - I )G & \theta (\bar{B}K)^{T}G \\ & - Q & & \tilde{A}_{d}^{T}P & 0 & \tilde{A}_{d}^{T}G & 0 \\ & & - M - G & 0 & 0 & 0 & 0 \\ & & & - P & 0 & 0 & 0 \\ & & & & - P & 0 & 0 \\ & * & & & & - H^{ - 2}G & \\ & & & & & & - H^{ - 2}G \end{bmatrix} < 0, \end{aligned}}} $$
(10)

where

$$\begin{aligned}& \psi_{1} = - P + M + DQ + Q - G,\qquad D = \begin{bmatrix} ( d_{M} - d_{m} )I & 0 \\ 0 & ( h_{M} - h_{m} )I \end{bmatrix}, \\& H = \begin{bmatrix} d_{M}I & 0 \\ 0 & h_{M}I \end{bmatrix},\qquad\theta^{2} = \alpha (1 - \alpha ). \end{aligned}$$

Proof

Define \(\tilde{\alpha}_{i,j} = \alpha (i,j) - \alpha\); it is obvious that

$$E \{ \tilde{\alpha}_{i,j} \} = 0,\qquad E \{ \tilde{ \alpha}_{i,j}\tilde{\alpha}_{i,j} \} = \alpha (1 - \alpha ). $$

Denote

$$\begin{aligned}& x_{d}(i,j) = \begin{bmatrix} x^{h}(i - d(i),j)\\ x^{v}(i,j - h(j)) \end{bmatrix},\qquad x_{M}(i,j) = \begin{bmatrix} x^{h}(i - d_{\mathrm{M}},j) \\ x^{v}(i,j - h_{\mathrm{M}}) \end{bmatrix}, \\& \delta^{h}(r,j) = x^{h}(r + 1,j) - x^{h}(r,j), \qquad \delta^{v}(i,t) = x^{v}(i,t + 1) - x^{v}(i,t), \\& \varphi (i,j) = \bigl[ x^{T}(i,j) \quad x_{d}^{T}(i,j) \quad x_{M}^{T}(i,j) \bigr] ^{T}, \\& \varpi (i,j) = \bigl( x^{h}(i,j),x^{h}(i - 1,j), \ldots,x^{h}\bigl(i - d(i),j\bigr),x^{v}(i,j),x^{v}(i,j - 1), \ldots,x^{v}\bigl(i,j - h(j)\bigr) \bigr). \end{aligned}$$

Consider the following Lyapunov function candidate:

$$V\bigl(x(i,j)\bigr) = V^{h}\bigl(x^{h}(i,j)\bigr) + V^{v}\bigl(x^{v}(i,j)\bigr), $$

where

$$\begin{aligned} &V^{h}\bigl(x^{h}(i,j)\bigr) = \sum_{k = 1}^{5} V_{k}^{h} \bigl(x^{h}(i,j)\bigr),\qquad V_{1}^{h} \bigl(x^{h}(i,j)\bigr) = x^{hT}(i,j)P^{h}x^{h}(i,j), \\ &V_{2}^{h}\bigl(x^{h}(i,j)\bigr) = \sum _{r = i - d(i)}^{i - 1} x^{hT}(r,j)Q^{h}x^{h}(r,j), \qquad V_{3}^{h}\bigl(x^{h}(i,j)\bigr) = \sum _{r = i - d_{M}}^{i - 1} x^{hT}(r,j)M^{h}x^{h}(r,j), \\ &V_{4}^{h}\bigl(x^{h}(i,j)\bigr) = \sum _{s = - d_{M}}^{ - d_{m}} \sum_{r = i + s}^{i - 1} x^{hT}(r,j)Q^{h}x^{h}(r,j), \\ &V_{5}^{h}\bigl(x^{h}(i,j)\bigr) = d_{M}\sum_{s = - d_{M}}^{ - 1} \sum _{r = i + s}^{i - 1} \delta^{hT} (r,j)G^{h}\delta^{h}(r,j), \\ &V^{v}\bigl(x^{v}(i,j)\bigr) = \sum _{k = 1}^{5} V_{k}^{v} \bigl(x^{v}(i,j)\bigr),\qquad V_{1}^{v} \bigl(x^{v}(i,j)\bigr) = x^{vT}(i,j)P^{v}x^{v}(i,j), \\ &V_{2}^{v}\bigl(x^{v}(i,j)\bigr) = \sum _{t = j - h(j)}^{j - 1} x^{vT}(i,t)Q^{v}x^{v}(i,t), \qquad V_{3}^{v}\bigl(x^{v}(i,j)\bigr) = \sum _{t = j - h_{M}}^{j - 1} x^{vT}(i,t)M^{v}x^{v}(i,t), \\ &V_{4}^{v}\bigl(x^{v}(i,j)\bigr) = \sum _{s = - h_{M}}^{ - h_{m}} \sum_{t = j + s}^{j - 1} x^{vT}(i,t)Q^{v}x^{v}(i,t), \\ &V_{5}^{v}\bigl(x^{v}(i,j)\bigr) = h_{M}\sum_{s = - h_{M}}^{ - 1} \sum _{t = j + s}^{j - 1} \delta^{vT} (i,t)G^{v}\delta^{v}(i,t), \end{aligned}$$
(11)

and \(P^{h},P^{v},Q^{h},Q^{v},M^{h},M^{v},G^{h},G^{v}\) are positive definite matrices to be determined.

Define the following index:

$$\begin{aligned} J &\stackrel{\Delta}{=} E \bigl\{ \Delta V\bigl(x(i,j)\bigr) \vert \varpi (i,j) \bigr\} \\ &= E \left\{ \left(\textstyle\begin{array}{@{}c@{}} V^{h}(x^{h}(i + 1,j)) - V^{h}(x^{h}(i,j)) \\ + V^{v}(x^{v}(i,j + 1)) - V^{v}(x^{v}(i,j)) \end{array}\displaystyle \right) \bigg| \varpi (i,j) \right\} \\ &= E \Biggl\{ \Biggl( \sum_{k = 1}^{5} \Delta V_{k}^{h} \bigl(x^{h}(i,j)\bigr) + \sum _{k = 1}^{5} \Delta V_{k}^{v} \bigl(x^{v}(i,j)\bigr) \Biggr) \bigg| \varpi (i,j) \Biggr\} . \end{aligned}$$
(12)

Calculating (12) along the solutions of system (8), we can obtain

$$\begin{aligned} & E \bigl\{ \Delta V_{1}^{h} \bigl(x^{h}(i,j)\bigr) \vert \varpi (i,j) \bigr\} \\ &\quad= E \bigl\{ x^{hT}(i + 1,j)P^{h}x^{h}(i + 1,j) \vert \varpi (i,j) \bigr\} - x^{hT}(i,j)P^{h}x^{h}(i,j), \end{aligned}$$
(13)
$$\begin{aligned} &E \bigl\{ \Delta V_{2}^{h} \bigl(x^{h}(i,j)\bigr) \vert \varpi (i,j) \bigr\} \\ &\quad= E \Biggl\{ \Biggl( \sum_{r = i + 1 - d(i + 1)}^{i} x^{hT}(r,j)Q^{h}x^{h}(r,j) - \sum _{r = i - d(i)}^{i} x^{hT}(r,j)Q^{h} x^{h}(r,j) \Biggr) \bigg| \varpi (i,j) \Biggr\} \\ &\quad\le E \Biggl\{ \Biggl( x^{hT}(i,j)Q^{h}x^{h}(i,j) - x_{d}^{hT}(i,j)Q^{h}x_{d}^{h}(i,j) \\ &\qquad{} + \sum_{r = i + 1 - d_{M}}^{i - d_{m}} x^{hT}(r,j)Q^{h}x^{h}(r,j) \Biggr) \bigg| \varpi (i,j) \Biggr\} , \end{aligned}$$
(14)
$$\begin{aligned} &E \bigl\{ \Delta V_{3}^{h} \bigl(x^{h}(i,j)\bigr) \vert \varpi (i,j) \bigr\} \\ &\quad= E \bigl\{ \bigl( x^{hT}(i,j)M^{h}x^{h}(i,j) - x_{M}^{hT}(i,j)M^{h}x_{M}^{h}(i,j) \bigr) \vert \varpi (i,j) \bigr\} , \end{aligned}$$
(15)
$$\begin{aligned} &E \bigl\{ \Delta V_{4}^{h} \bigl(x^{h}(i,j)\bigr) \vert \varpi (i,j) \bigr\} \\ &\quad= E \Biggl\{ \Biggl( (d_{M} - d_{m})x^{hT}(i,j)Q^{h}x^{h}(i,j) - \sum_{r = i + 1 - d_{M}}^{i - d_{m}} x^{hT}(r,j)Q^{h}x^{h}(r,j) \Biggr) \bigg| \varpi (i,j) \Biggr\} , \end{aligned}$$
(16)
$$\begin{aligned} &E \bigl\{ \Delta V_{5}^{h} \bigl(x^{h}(i,j)\bigr) \vert \varpi (i,j) \bigr\} \\ &\quad= E \Biggl\{ \Biggl( d_{M}^{2} \delta^{hT}(i,j)G^{h}\delta^{h}(i,j) - d_{M}\sum_{r = i - d_{M}}^{i - 1} \delta^{hT}(r,j)G^{h}\delta^{h}(r,j) \Biggr) \bigg| \varpi (i,j) \Biggr\} , \end{aligned}$$
(17)

and

$$\begin{aligned} &E \bigl\{ \Delta V_{1}^{v} \bigl(x^{v}(i,j)\bigr) \vert \varpi (i,j) \bigr\} \\ &\quad= E \bigl\{ x^{vT}(i,j + 1)P^{v}x^{v}(i,j + 1) \vert \varpi (i,j) \bigr\} - x^{vT}(i,j)P^{v}x^{v}(i,j), \end{aligned}$$
(18)
$$\begin{aligned} &E \bigl\{ \Delta V_{2}^{v} \bigl(x^{v}(i,j)\bigr) \vert \varpi (i,j) \bigr\} \\ &\quad= E \Biggl\{ \Biggl( \sum_{t = j + 1 - h(j + 1)}^{j} x^{vT}(i,t)Q^{v}x^{v}(i,t) - \sum _{t = j - h(j)}^{j - 1} x^{vT}(i,t)Q^{v} x^{v}(i,t) \Biggr) \bigg| \varpi (i,j) \Biggr\} \\ &\quad\le E \Biggl\{ \Biggl( x^{vT}(i,j)Q^{v}x^{v}(i,j) - x_{d}^{vT}(i,j)Q^{v}x_{d}^{v}(i,j) \\ &\qquad{}+ \sum_{t = j + 1 - h_{M}}^{j - h_{m}} x^{vT}(i,t)Q^{v}x^{v}(i,t) \Biggr) \bigg| \varpi (i,j) \Biggr\} , \end{aligned}$$
(19)
$$\begin{aligned} &E \bigl\{ \Delta V_{3}^{v} \bigl(x^{v}(i,j)\bigr) \vert \varpi (i,j) \bigr\} \\ &\quad= E \bigl\{ \bigl( x^{vT}(i,j)M^{v}x^{v}(i,j) - x_{M}^{vT}(i,j)M^{v}x_{M}^{v}(i,j) \bigr) \vert \varpi (i,j) \bigr\} , \end{aligned}$$
(20)
$$\begin{aligned} &E \bigl\{ \Delta V_{4}^{v} \bigl(x^{v}(i,j)\bigr) \vert \varpi (i,j) \bigr\} \\ &\quad= E \Biggl\{ \Biggl( (h_{M} - h_{m})x^{vT}(i,j)Q^{v}x^{v}(i,j) - \sum_{t = j + 1 - h_{M}}^{j - h_{m}} x^{vT}(i,t)Q^{v}x^{v}(i,t) \Biggr) \bigg| \varpi (i,j) \Biggr\} , \end{aligned}$$
(21)
$$\begin{aligned} &E \bigl\{ \Delta V_{5}^{v} \bigl(x^{v}(i,j)\bigr) \vert \varpi (i,j) \bigr\} \\ &\quad= E \Biggl\{ \Biggl( h_{M}^{2} \delta^{vT}(i,j)G^{v}\delta^{v}(i,j) - h_{M}\sum_{t = j - h_{M}}^{j - 1} \delta^{vT}(i,t)G^{v}\delta^{v}(i,t) \Biggr) \bigg| \varpi (i,j) \Biggr\} . \end{aligned}$$
(22)

For (17) and (22), using Lemma 1 yields

$$\begin{aligned} &E \bigl\{ \Delta V_{5}^{h} \bigl(x^{h}(i,j)\bigr) \vert \varpi (i,j) \bigr\} \\ &\quad\le E \Biggl\{ \Biggl( d_{M}^{2} \delta^{hT}(i,j)G^{h}\delta^{h}(i,j) - \sum _{r = i - d_{M}}^{i - 1} \delta^{hT} (r,j)G^{h} \sum_{r = i - d_{M}}^{i - 1} \delta^{h}(r,j) \Biggr) \bigg| \varpi (i,j) \Biggr\} \\ &\quad= E \left\{ \left( \textstyle\begin{array}{@{}c@{}} d_{M}^{2}(x^{h}(i + 1,j) - x^{h}(i,j))^{T}G^{h}(x^{h}(i + 1,j) - x^{h}(i,j)) \\ - (x^{h}(i,j) - x_{M}^{h}(i,j))^{T}G^{h}(x^{h}(i,j) - x_{M}^{h}(i,j)) \end{array}\displaystyle \right) \bigg| \varpi (i,j) \right\}, \end{aligned}$$
(23)

and

$$\begin{aligned} &E \bigl\{ \Delta V_{5}^{v} \bigl(x^{v}(i,j)\bigr) \vert \varpi (i,j) \bigr\} \\ &\quad\le E \Biggl\{ \Biggl( h_{M}^{2} \delta^{vT}(i,j)G^{v}\delta^{v}(i,j) - \sum _{t = j - h_{M}}^{j - 1} \delta^{vT} (i,t)G^{v} \sum_{t = j - h_{M}}^{j - 1} \delta^{v}(i,t) \Biggr) \bigg| \varpi (i,j) \Biggr\} \\ &\quad= E \left\{ \left( \textstyle\begin{array}{@{}c@{}} h_{M}^{2}(x^{v}(i,j + 1) - x^{v}(i,j))^{T}G^{v}(x^{v}(i,j + 1) - x^{v}(i,j)) \\ - (x^{v}(i,j) - x_{M}^{v}(i,j))^{T}G^{v}(x^{v}(i,j) - x_{M}^{v}(i,j)) \end{array}\displaystyle \right) \bigg| \varpi (i,j) \right\}. \end{aligned}$$
(24)

Since \(\tilde{\alpha}_{i,j}\) is independent of \(x(i,j)\) and \(x_{d}(i,j)\), we have

$$\begin{aligned} &E \bigl\{ \bigl( x^{hT}(i + 1,j)P^{h}x^{h}(i + 1,j) \bigr) \vert \varpi (i,j) \bigr\} + E \bigl\{ \bigl( x^{vT}(i,j + 1)P^{v}x^{v}(i,j + 1) \bigr) \vert \varpi (i,j) \bigr\} \\ &\quad= E \left\{ \left( \textstyle\begin{array}{@{}c@{}} x^{h}(i + 1,j) \\ x^{v}(i,j + 1) \end{array}\displaystyle \right)^{T}P \left( \textstyle\begin{array}{c} x^{h}(i + 1,j) \\ x^{v}(i,j + 1) \end{array}\displaystyle \right) \bigg| \varpi (i,j) \right\} \\ &\quad= E \left\{ \begin{bmatrix} x(i,j) \\ x_{d}(i,j) \end{bmatrix}^{T} \left( \begin{bmatrix} ( \tilde{A} + \alpha \bar{B}K )^{T} & \theta (\bar{B}K)^{T} \\ \tilde{A}_{d}^{T} & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} P & \\ & P \end{bmatrix} \begin{bmatrix} ( \tilde{A} + \alpha \bar{B}K ) & \tilde{A}_{d} & 0 \\ \theta \bar{B}K & 0 & 0 \end{bmatrix} \right)\right. \\ &\left.\qquad{}\times\begin{bmatrix} x(i,j) \\ x_{d}(i,j) \end{bmatrix} \bigg| \varpi (i,j) \right\}. \end{aligned}$$
(25)

Combining (13)-(16), (18)-(21) and (23)-(25), we can obtain

$$J \le E \bigl\{ \varphi^{T}(i,j)\psi \varphi (i,j) \vert \varpi (i,j) \bigr\} , $$

where

$$ \psi = \begin{bmatrix} \psi_{1} & 0 & G \\ & - Q & 0 \\ * & & - M - G \end{bmatrix} + \Lambda_{1}^{T} \begin{bmatrix} P & 0 \\ 0 & P \end{bmatrix} \Lambda_{1} + \Lambda_{2}^{T} \begin{bmatrix} H^{2}G & \\ & H^{2}G \end{bmatrix} \Lambda_{2}, $$
(26)

and

$$\Lambda_{1} = \begin{bmatrix} ( \tilde{A} + \alpha \bar{B}K ) & \tilde{A}_{d} & 0 \\ \theta \bar{B}K & 0 & 0 \end{bmatrix},\qquad \Lambda_{2} = \begin{bmatrix} ( \tilde{A} + \alpha \bar{B}K ) - I & \tilde{A}_{d} & 0 \\ \theta \bar{B}K & 0 & 0 \end{bmatrix}. $$

By Schur complements, (10) is equivalent to \(\psi < 0\), which implies \(J < 0\), that is,

$$ E \bigl\{ \bigl( V^{h}\bigl(x^{h}(i + 1,j)\bigr) - V^{h}\bigl(x^{h}(i,j)\bigr) + V^{v} \bigl(x^{v}(i,j + 1)\bigr) - V^{v}\bigl(x^{v}(i,j) \bigr) \bigr) \vert \varpi (i,j) \bigr\} < 0. $$
(27)

Taking mathematical expectation on both sides of (27) leads to

$$ E \bigl\{ V^{h}\bigl(x^{h}(i + 1,j)\bigr) + V^{v}\bigl(x^{v}(i,j + 1)\bigr) \bigr\} \le E \bigl\{ V^{h}\bigl(x^{h}(i,j)\bigr) + V^{v} \bigl(x^{v}(i,j)\bigr) \bigr\} . $$
(28)

Summing both sides of (28) over all indices with \(i + j = N\), i.e., for i from N to 0 and j from 0 to N, for any nonnegative integer N, and considering boundary conditions (9), we have

$$\begin{aligned} &E \left\{ \textstyle\begin{array}{@{}c@{}} V^{h}(x^{h}(1,N)) + V^{v}(x^{v}(0,N + 1)) + V^{h}(x^{h}(2,N - 1)) \\ + V^{v}(x^{v}(1,N)) + \cdots + V^{h}(x^{h}(N + 1,0)) + V^{v}(x^{v}(N,1)) \end{array}\displaystyle \right\} \\ &\quad= \sum_{i + j = N + 1} E \bigl\{ V^{h} \bigl(x^{h}(i,j)\bigr) \bigr\} + \sum_{i + j = N + 1} E \bigl\{ V^{v}\bigl(x^{v}(i,j)\bigr) \bigr\} \\ &\quad= \sum _{i + j = N + 1} E \left\{ V \left( \begin{bmatrix} x^{h}(i,j) \\ x^{v}(i,j) \end{bmatrix} \right) \right \} \\ &\quad\le E \left\{ V \left( \begin{bmatrix} x^{h}(0,N) \\ x^{v}(0,N) \end{bmatrix} \right) + \cdots + V \left( \begin{bmatrix} x^{h}(N,0) \\ x^{v}(N,0) \end{bmatrix} \right) \right\} \\ &\quad= \sum_{i + j = N} E \left\{ V \left( \begin{bmatrix} x^{h}(i,j) \\ x^{v}(i,j) \end{bmatrix} \right) \right\}. \end{aligned}$$
(29)

It is clear that the sum of the Lyapunov functional value decreases along the state trajectories. Then, from [15], we can conclude that

$$\lim_{i + j \to \infty} E \bigl\{ \bigl\Vert x(i,j) \bigr\Vert ^{2} \bigr\} = 0, $$

which implies that system (8) is mean-square asymptotically stable.

This completes the proof of Theorem 1. □

Remark 2

Theorem 1 provides a sufficient stability condition for 2D uncertain systems with a delay varying in a range and random packet dropouts. If the communication link between the plant and the controller is perfect, that is, there is no packet dropout during transmission, then \(\alpha = 1\) and \(\theta = 0\). In this case, the condition in Theorem 1 reduces to the condition obtained in [19, 20] for a 2D deterministic system with time delay. From this point of view, Theorem 1 can be seen as an extension to 2D time-delay systems with packet dropouts.

3.2 Controller design

Theorem 1 gives a mean-square asymptotic stability condition in which the controller gain matrix K is known. However, our eventual purpose is to determine a suitable K from the system matrices \(A,A_{d},B,E,F_{1},F_{2}\) and the parameter α.

Lemma 2

[35]

Let \(U,V,\Sigma\) and W be real matrices of appropriate dimensions with W satisfying \(W = W^{T}\), then for all \(\Sigma^{T}\Sigma \le I\),

$$W + U\Sigma V + V^{T}\Sigma^{T}U^{T} < 0, $$

if and only if there exists \(\varepsilon > 0\) such that

$$W + \varepsilon^{ - 1}UU^{T} + \varepsilon V^{T}V < 0. $$

Now, we can give the following result.

Theorem 2

Given positive integers \(d_{m},d_{M},h_{m},h_{M}\), 2D system (8) is mean-square asymptotically stable if there exist positive definite symmetric matrices \(L = \bigl [{\scriptsize\begin{matrix}{} L^{h} & \cr & L^{v} \end{matrix}} \bigr ]\), \(S = \bigl [{\scriptsize\begin{matrix}{} S^{h} & \cr & S^{v} \end{matrix}} \bigr ]\), \(M_{1} = \bigl [{\scriptsize\begin{matrix}{} M_{1}^{h} & \cr & M_{1}^{v} \end{matrix}} \bigr ]\), \(M_{2} = \bigl [{\scriptsize\begin{matrix}{} M_{2}^{h} & \cr & M_{2}^{v} \end{matrix}} \bigr ]\), \(X = \bigl [{\scriptsize\begin{matrix}{} X^{h} & \cr & X^{v} \end{matrix}} \bigr ]\), a matrix Y, and a positive scalar ε such that the following matrix inequality holds:

$$ \begin{bmatrix} \Theta_{11} & \Theta_{12} & \Theta_{13} & \Theta_{14} \\ & \Theta_{22} & 0 & 0 \\ & & \Theta_{33} & 0 \\ {*} & & & \Theta_{44} \end{bmatrix} < 0, $$

where

$$\begin{aligned}& \Theta_{11} = \begin{bmatrix} \tilde{\psi}_{1} & 0 & X \\ & - S & 0 \\ * & & - M_{2} - X \end{bmatrix},\qquad \Theta_{12} = \begin{bmatrix} L\bar{A}^{T} + \alpha Y^{T}\bar{B}^{T} & \theta Y^{T}\bar{B}^{T} \\ L\bar{A}_{d}^{T} & 0 \\ 0 & 0 \end{bmatrix}, \\& \Theta_{13} = \begin{bmatrix} L\bar{A}^{T} + \alpha Y^{T}\bar{B}^{T} - L & \theta Y^{T}\bar{B}^{T} \\ L\bar{A}_{d}^{T} & 0 \\ 0 & 0 \end{bmatrix},\qquad \Theta_{14} = \begin{bmatrix} L\bar{F}_{1}^{T}\\ L\bar{F}_{2}^{T} \\ 0 \end{bmatrix}, \\& \Theta_{22} = \begin{bmatrix} \varepsilon \bar{E}^{T}\bar{E} - L &\\ & \varepsilon \bar{E}^{T}\bar{E} - L \end{bmatrix},\qquad \Theta_{33} = \begin{bmatrix} \varepsilon \bar{E}^{T}\bar{E} - H^{ - 2}X & \\ & \varepsilon \bar{E}^{T}\bar{E} - H^{ - 2}X \end{bmatrix}, \\& \Theta_{44} = - \varepsilon I, \qquad \tilde{\psi}_{1} = - L + M_{1} + DS + S - LX^{ - 1}L. \end{aligned}$$

In this case, a suitable ILC law can be selected as \(K = YL^{ - 1}\).

Proof

Pre- and post-multiplying inequality (10) by \(\operatorname{diag} \{ L,L,G^{ - 1},L,L,G^{ - 1},G^{ - 1} \}\), using Schur complements, and letting \(L = P^{ - 1}\) and \(Y = KL\), the following inequality can be obtained:

$$\begin{aligned} {{\begin{aligned}[b] \begin{bmatrix} \tilde{\psi}_{1} & 0 & L & L\tilde{A}^{T} + \alpha Y^{T}\bar{B}^{T} & \theta Y^{T}\bar{B}^{T} & L\tilde{A}^{T} + \alpha Y^{T}\bar{B}^{T} - L & \theta Y^{T}\bar{B}^{T} \\ & - S & 0 & L\tilde{A}_{d}^{T} & 0 & L\tilde{A}_{d}^{T} & 0 \\ & & - M_{2} - X & 0 & 0 & 0 & 0 \\ & & & - L & 0 & 0 & 0 \\ & & & & - L & 0 & 0 \\ & * & & & & - H^{ - 2}G^{ - 1} & 0 \\ & & & & & & - H^{ - 2}G^{ - 1} \end{bmatrix} < 0, \end{aligned}}} \end{aligned}$$
(30)

that is,

$$\begin{aligned} & \begin{bmatrix} \tilde{\psi}_{1} & 0 & L & L\bar{A}^{T} + \alpha Y^{T}\bar{B}^{T} & \theta Y^{T}\bar{B}^{T} & L\bar{A}^{T} + \alpha Y^{T}\bar{B}^{T} - L & \theta Y^{T}\bar{B}^{T} \\ & - S & 0 & L\bar{A}_{d}^{T} & 0 & L\bar{A}_{d}^{T} & 0 \\ & & - M_{2} - X & 0 & 0 & 0 & 0 \\ & & & - L & 0 & 0 & 0 \\ & & & & - L & 0 & 0 \\ & * & & & & - H^{ - 2}G^{ - 1} & 0 \\ & & & & & & - H^{ - 2}G^{ - 1} \end{bmatrix} \\ &\quad{}+ \begin{bmatrix} L\bar{F}_{1}^{T} \\ L\bar{F}_{2}^{T} \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \Sigma^{T} \begin{bmatrix} 0 \\ 0 \\ 0 \\ \bar{E} \\ 0 \\ \bar{E} \\ 0 \end{bmatrix}^{T} + \left( \begin{bmatrix} L\bar{F}_{1}^{T}\\ L\bar{F}_{2}^{T} \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \Sigma^{T} \begin{bmatrix} 0 \\ 0 \\ 0 \\ \bar{E} \\ 0 \\ \bar{E} \\ 0 \end{bmatrix}^{T} \right)^{T} < 0. \end{aligned}$$
(31)

By Lemma 2, (31) holds if and only if there exists a scalar \(\varepsilon > 0\) such that

$$\begin{aligned} &{{ \begin{bmatrix} \tilde{\psi}_{1} & 0 & L & L\bar{A}^{T} + \alpha Y^{T}\bar{B}^{T} & \theta Y^{T}\bar{B}^{T} & L\bar{A}^{T} + \alpha Y^{T}\bar{B}^{T} - L & \theta Y^{T}\bar{B}^{T} \\ & - S & 0 & L\bar{A}_{d}^{T} & 0 & L\bar{A}_{d}^{T} & 0 \\ & & - M_{2} - X & 0 & 0 & 0 & 0 \\ & & & \varepsilon \bar{E}^{T}\bar{E} - L & 0 & 0 & 0 \\ & & & & \varepsilon \bar{E}^{T}\bar{E} - L & 0 & 0 \\ & * & & & & \varepsilon \bar{E}^{T}\bar{E} - H^{ - 2}G^{ - 1} & 0 \\ & & & & & & \varepsilon \bar{E}^{T}\bar{E} - H^{ - 2}G^{ - 1} \end{bmatrix}}} \\ &{{\quad{}+ \varepsilon^{ - 1} \begin{bmatrix} L\bar{F}_{1}^{T} \\ L\bar{F}_{2}^{T} \\ 0 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix} \begin{bmatrix} \bar{F}_{1}L & \bar{F}_{2}L & 0 & 0 & 0 & 0 & 0 \end{bmatrix} < 0.}} \end{aligned}$$
(32)

Using Schur complements, condition (32) implies that the condition in Theorem 2 holds. □

Remark 3

Note that the condition in Theorem 2 is no longer an LMI owing to the term \(LX^{ - 1}L\) in \(\tilde{\psi}_{1}\), so the available LMI tools cannot be used directly to obtain a feasible solution. However, we can use iterative algorithms combined with LMI convex optimization, as is done in [36, 37], to obtain an implementable control law.
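A common alternative to the iterative algorithms mentioned in Remark 3 (not used in this paper, but standard in the LMI literature) is to over-bound the nonconvex term: since \((L - X)X^{-1}(L - X) \succeq 0\), we have \(LX^{-1}L \succeq 2L - X\), so replacing \(-LX^{-1}L\) by \(-(2L - X)\) in \(\tilde{\psi}_{1}\) yields a sufficient, more conservative LMI. A minimal numerical check of the bound:

```python
import numpy as np

rng = np.random.default_rng(3)

def rand_pd(n):
    """Random positive definite matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

# Check L X^{-1} L - (2L - X) = (L - X) X^{-1} (L - X) >= 0 numerically.
n = 4
L, X = rand_pd(n), rand_pd(n)
gap = L @ np.linalg.inv(X) @ L - (2 * L - X)
min_eig = np.linalg.eigvalsh((gap + gap.T) / 2).min()
```

A nonnegative smallest eigenvalue of `gap` (up to numerical tolerance) confirms the matrix inequality for the sampled data.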

Remark 4

It is noticed that data dropouts may occur on both the system output and the control input sides in NCSs. In this paper, we consider only output measurement missing for the sake of convenience, as is also done in most existing works. However, the result in this paper can be extended to control input signal dropouts.

Remark 5

Since the system performs the same task repetitively, computational complexity is an important issue for ILC systems. A large number of iterations leads to high accuracy but a heavy computational burden, so balancing the iteration number against the tracking accuracy is an important problem for practical ILC systems. Some efforts can be made to address this issue. First, one can pre-specify an acceptable tracking error that satisfies the accuracy requirement of the control objective and stop the iteration process once the tracking error reaches this value. Second, one can design optimal ILC with monotonic convergence and a fast convergence speed so that the learning transient is reduced.

4 Illustrative example

In this section, the result is applied to an injection molding process to demonstrate the effectiveness of the proposed ILC design approach. Injection molding is a typical repetitive process in which key process variables are controlled to follow certain profiles repetitively to ensure product quality [22, 38]. Injection velocity is a key variable in the filling stage and is controlled by manipulating the opening of a hydraulic valve. A state-space representation of injection velocity control can be described as follows:

$$\left \{ \textstyle\begin{array}{@{}l} x(k,t + 1) = ( A + \Delta A )x(k,t) + ( A_{d} + \Delta A_{d} )x(k,t - d(t)) + Bu(k,t), \\ y(k,t) = Cx(k,t), \end{array}\displaystyle \right . $$

where matrices \(A = [{\scriptsize\begin{matrix}{}1.58 & - 0.59 \cr 1 & 0 \end{matrix}} ]\), \(B = [{\scriptsize\begin{matrix}{}1 \cr 0 \end{matrix}} ]\), \(C = [ 1.69 \quad {-}1.42 ]\). \(\Delta A = E\Sigma F_{1},\Delta A_{d} = E\Sigma F_{2}\) indicate uncertain parameter perturbations with

$$\begin{aligned} &E = \begin{bmatrix} 0.1 & 0.1 \\ 0 & 0 \end{bmatrix},\qquad F_{1} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},\qquad F_{2} = \begin{bmatrix} 0.05 \\ 0.05 \end{bmatrix}, \\ &\Sigma = \begin{bmatrix} \xi_{1} & \\ & \xi_{2} \end{bmatrix},\quad \vert \xi_{1} \vert < 1,\vert \xi_{2} \vert < 1, \end{aligned}$$

where \(\xi_{1},\xi_{2}\) are unknown variables. The bounds of the time-varying delays are chosen as \(0.5 \le d(t) \le 4\) and \(0.1 \le h(k) \le 2\), which are derived from [22].

The desired trajectory is given as follows:

$$y_{d}(t) = \left \{ \textstyle\begin{array}{@{}l@{\quad}l} 0.2 + 0.0025t,& t \le 40, \\ 0.3,& 40 < t \le 300, \\ 0.3 + 0.0025(t - 300),& 300 < t \le 380, \\ 0.5,& 380 < t \le 800. \end{array}\displaystyle \right . $$
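The piecewise-linear reference above can be coded directly; the two ramps rise at 0.0025 per time step and the profile is continuous at the breakpoints:

```python
def y_d(t):
    """Desired injection-velocity profile from the example (piecewise linear)."""
    if t <= 40:
        return 0.2 + 0.0025 * t
    if t <= 300:
        return 0.3
    if t <= 380:
        return 0.3 + 0.0025 * (t - 300)
    return 0.5
```

Continuity is easy to verify: the first ramp reaches 0.2 + 0.0025·40 = 0.3 at t = 40, and the second reaches 0.3 + 0.0025·80 = 0.5 at t = 380.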

In the simulation, the initial states are \(x_{1}(0,k) = x_{2}(0,k) = 0\) for all k, and the initial control input is \(u(t,0) = 0\) for all t. The time-varying delay \(d(t)\) changes randomly within \([0.5, 4]\). The uncertain parameters \(\xi_{1},\xi_{2}\) are assumed to vary randomly within \([0, 1]\) along both the time and iteration directions.

To perform the simulation, we consider the following two cases: (1) \(\bar{\alpha} = 1\); (2) \(\bar{\alpha} = 0.75\). Case 1 means that there is no packet dropout, while Case 2 corresponds to a 25% dropout rate. Using Theorem 2, we obtain the controller gain \(K = [- 1.95 \ 0.63 \ 0.06]\) for Case 1 and \(K = [- 2.05 \ 0.73 \ 0.08]\) for Case 2. Figure 2 shows the tracking error on the iteration domain for Case 1. The tracking error converges to a small steady-state value without packet dropouts, where the residual error is caused by the unpredictable random parameter disturbances. To show the convergence more clearly, the system outputs at the 1st, 10th and 50th iterations are plotted in Figure 3.
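The qualitative effect of dropouts on convergence can be reproduced with a deliberately simplified scalar sketch (this is not the injection molding model: the plant is an assumed memoryless gain `g`, and `1 - gamma*g` plays the role of the closed-loop learning contraction):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy scalar ILC: plant y = g*u, update u <- u + gamma*e applied only when the
# packet arrives. Per iteration the error contracts by (1 - gamma*g) on
# arrival and is unchanged on a dropout.
g, gamma, iters, trials = 2.0, 0.4, 30, 200

def mean_final_error(p_arrive):
    errs = []
    for _ in range(trials):
        e = 1.0                          # initial tracking error
        for _ in range(iters):
            if rng.random() < p_arrive:  # Bernoulli packet arrival
                e *= (1 - gamma * g)
        errs.append(abs(e))
    return float(np.mean(errs))

e_full = mean_final_error(1.0)    # Case 1: no dropout
e_drop = mean_final_error(0.75)   # Case 2: 25% dropout rate
```

As in Figures 2 and 4, the dropout case still converges, but more slowly: each lost packet skips one contraction, so the average final error is larger.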

Figure 2

Max tracking error on the iteration domain for Case 1.

Figure 3

System output profiles for Case 1: (a) 1st iteration; (b) 10th iteration; (c) 50th iteration.

For Case 2, the tracking error on the iteration domain is plotted in Figure 4, and the system outputs at the three iterations are plotted in Figure 5. Even though the tracking performance is degraded and significant tracking errors exist in the initial iterations due to packet dropouts, good tracking is still achieved after a number of iterations. This demonstrates that the proposed method provides robust tracking against iteration-varying uncertainties, random packet dropouts and time-varying delays. Moreover, as Figures 2 and 4 show, the convergence of Case 2 is slower than that of Case 1, illustrating that the convergence speed decreases as the dropout rate increases.

Figure 4

Max tracking error on the iteration domain for Case 2.

Figure 5

System output profiles for Case 2: (a) 1st iteration; (b) 10th iteration; (c) 50th iteration.

5 Conclusions

Robust ILC design has been considered for uncertain linear systems with time-varying delays and random packet dropouts. By modeling the packet dropout as a stochastic sequence satisfying the Bernoulli binary distribution, the considered system is transformed into a 2D stochastic system described by the Roesser model with a delay varying in a range. A delay-dependent stability condition is then derived in terms of linear matrix inequalities (LMIs), together with formulas for the ILC law design. The results on injection velocity control illustrate the feasibility and effectiveness of the proposed design.