1 Introduction

In recent decades, delay differential equations have been widely used in economics, physics, and engineering control. Existence, stability, and periodic solutions have been studied extensively, and there are many interesting and important results; see, for example, [1–6].

Since Uchiyama [7] and Arimoto [8] put forward iterative learning control (ILC for short), a wide variety of ILC problems and related issues have been proposed and studied in recent decades, for example, ILC for varying reference trajectories [9, 10], ILC for fractional differential systems [11, 12], ILC for impulsive differential systems [13, 14], and research on the robustness of ILC [15, 16].

Time delays are a common phenomenon in practical engineering, and their presence complicates many practical control problems. Consequently, the control of time-delay systems has attracted increasing attention. Some effective methods for studying iterative learning control of time-delay systems are provided by Sun [17–19].

After reviewing the previous works dealing with ILC problems for delay systems, we observe the following facts:

  (i) A delay system \(\dot{x}(t)=Ax(t)+Bx(t-\tau)\), \(t>0\), is mostly treated as an integral system, where A, B are suitable matrices.

  (ii) A uniform transition matrix associated with A, B is not derived directly, and the structure of the solution \(x(t)\) is not well characterized on each of the subintervals \([0,\tau], \ldots,[n\tau,(n+1)\tau]\), \(n\in \mathbb {N}\).

  (iii) An extended Gronwall inequality is used to derive convergence results instead of applying direct methods.

It is remarkable that Khusainov and Shuklin [20] first introduced a delayed matrix exponential method to study the following linear differential equation with one delay term:

$$ \textstyle\begin{cases} \dot{x}(t)=Ax(t)+Bx(t-\tau), \quad t\geq0, \tau\geq0, \\ x(t)=\varphi(t), \quad-\tau\leq t\leq0, \end{cases} $$
(1)

where A and B are matrices and φ is an arbitrary continuously differentiable vector function. A representation of a solution of system (1) with \(AB=BA\) is given by means of a so-called delayed matrix exponential, which is defined as follows:

$$ e_{\tau}^{Bt}= \textstyle\begin{cases} \Theta, & t< -\tau, \\ E, & -\tau\leq t< 0, \\ E+Bt+\frac{B^{2}(t-\tau)^{2}}{2}+\cdots+\frac{B^{k}(t-(k-1)\tau)^{k}}{k!}, & (k-1)\tau\leq t< k\tau,k=1,2, \ldots, \end{cases} $$
(2)

for \(\tau>0\), where Θ and E are the n-dimensional zero and identity matrices, respectively.
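For readers who want to evaluate (2) numerically, a minimal sketch in Python/NumPy is given below (the function name delayed_exp and the truncation logic are ours; this is only an illustration of definition (2)):

```python
import numpy as np
from math import factorial

def delayed_exp(B, tau, t):
    """Evaluate the delayed matrix exponential e_tau^{Bt} from (2)."""
    n = B.shape[0]
    if t < -tau:
        return np.zeros((n, n))                 # Theta for t < -tau
    if t < 0:
        return np.eye(n)                        # E on [-tau, 0)
    k = int(np.floor(t / tau)) + 1              # (k-1)*tau <= t < k*tau
    S = np.eye(n)
    for j in range(1, k + 1):
        S += np.linalg.matrix_power(B, j) * (t - (j - 1) * tau) ** j / factorial(j)
    return S
```

For commuting matrices with \(AB=BA\), this is the only nonstandard ingredient needed to evaluate the solution formulas below.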

For more recent contributions on oscillating systems with pure delay, relative controllability of systems with pure delay, and asymptotic stability of nonlinear multidelay differential equations, one can refer to [21–27] and the references therein.

Inspired by the references mentioned above, in this work we discuss ILC for time-delay systems. More precisely, we study the following linear controlled system with pure delay:

$$ \textstyle\begin{cases} \dot{x}_{k}(t)=Ax_{k}(t)+Bx_{k}(t- \tau)+u_{k}(t), \quad t\in[0,T], \\ {x}_{k}(t)=\varphi(t), \quad-\tau\leq t\leq0, \tau>0, \\ y_{k}(t)=Cx_{k}(t)+Du_{k}(t), \end{cases} $$
(3)

where T denotes the pre-fixed length of the iteration domain with \(T=N\tau\), \(N\in \mathbb {N}\), and k denotes the kth learning iteration. Let \(\varphi\in\mathcal{C}_{\tau}^{1}:=\mathcal{C}^{1}([-\tau, 0], \mathbb {R}^{n})\), let A and B be two \(n\times n\) matrices such that \(AB=BA\), and let C, D be two \(m\times n\) matrices. The variables \(x_{k}, u_{k}\in \mathbb {R}^{n}\) and \(y_{k}\in \mathbb {R}^{m}\) denote the state, input, and output, respectively. By [20], Corollary 2.2, we derive that the state \(x_{k}(\cdot)\) has the form

$$ \begin{aligned}[b] x_{k}(t)&=e^{A(t+\tau)}e_{\tau}^{B_{1}t} \varphi(-\tau) + \int_{-\tau}^{0}e^{A(t-\tau-s)}e_{\tau}^{B_{1}(t-\tau-s)}e^{A\tau } \bigl[\varphi'(s)-A\varphi(s) \bigr]\,ds \\ &\quad {}+ \int_{0}^{t}e^{A(t-\tau-s)}e_{\tau}^{B_{1}(t-\tau-s)}e^{A\tau}u_{k}(s)\,ds, \end{aligned} $$
(4)

where \(e_{\tau}^{Bt}\) is defined in (2) and \(B_{1}=e^{-A\tau}B\).
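As an illustration, the state formula (4) can be evaluated directly by numerical quadrature. The following sketch (Python with NumPy/SciPy) reuses the delayed_exp helper from the sketch above; the trapezoidal rule and all function names are our own choices and are not part of the paper:

```python
import numpy as np
from scipy.linalg import expm

def trap(vals, s):
    """Composite trapezoidal rule for sampled vector values vals[i] = g(s[i])."""
    w = np.diff(s)[:, None]
    return 0.5 * np.sum((vals[1:] + vals[:-1]) * w, axis=0)

def state_formula(A, B, tau, phi, dphi, u, t, m=400):
    """Evaluate x_k(t) from (4); phi, dphi, u are callables returning vectors."""
    B1 = expm(-A * tau) @ B
    eAtau = expm(A * tau)

    def kernel(s):
        # e^{A(t - tau - s)} e_tau^{B1 (t - tau - s)} e^{A tau}
        return expm(A * (t - tau - s)) @ delayed_exp(B1, tau, t - tau - s) @ eAtau

    x = expm(A * (t + tau)) @ delayed_exp(B1, tau, t) @ phi(-tau)
    s1 = np.linspace(-tau, 0.0, m)
    x = x + trap(np.array([kernel(s) @ (dphi(s) - A @ phi(s)) for s in s1]), s1)
    if t > 0:
        s2 = np.linspace(0.0, t, m)
        x = x + trap(np.array([kernel(s) @ u(s) for s in s2]), s2)
    return x
```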

By introducing the delayed matrix exponential \(e_{\tau}^{Bt}\) in (2), we state some possible advantages of our approach as follows.

  (i) The structure of the solution \(x(t)\) is characterized on every subinterval.

  (ii) A direct method is explored to deal with ILC problems by using mathematical analysis tools.

Let \(y_{d}\) be a desired trajectory, let \(e_{k}=y_{d}-y_{k}\) denote the output error, and set \(\delta u_{k}=u_{k+1}-u_{k}\).

For the system (3), we consider the open-loop P-type ILC updating law

$$ u_{k+1}(t)=u_{k}(t)+P_{o}e_{k}(t). $$
(5)

For the system (3) with \(D=\Theta\), we consider the open-loop D-type ILC updating law

$$ u_{k+1}(t)=u_{k}(t)+D_{o} \dot{e}_{k}(t), $$
(6)

where \(P_{o}\) and \(D_{o}\in \mathbb {R}^{n\times m}\) are learning gain matrices.
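On sampled signals, the two updating laws can be realized, for instance, as follows (a minimal sketch; the array layout and the finite-difference estimate of \(\dot{e}_{k}\) are our own choices):

```python
import numpy as np

def p_type_update(u_k, e_k, Po):
    """P-type law (5): u_k has shape (M, n), e_k has shape (M, m), Po has shape (n, m)."""
    return u_k + e_k @ Po.T

def d_type_update(u_k, e_k, Do, h):
    """D-type law (6) with a finite-difference estimate of the derivative of e_k."""
    return u_k + np.gradient(e_k, h, axis=0) @ Do.T
```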

The main objective of this paper is to use the delayed exponential matrix to generate control inputs \(u_{k}\) such that, by adopting P-type or D-type ILC, the time-delay system output \(y_{k}\) tracks the reference trajectory \(y_{d}\) as accurately as possible as \(k\rightarrow\infty\), uniformly on \([0,T]\) in the sense of the λ-norm.

Here we point out that our method is different from the methods given in the previous references; however, we obtain the same convergence results. Our method relies on a direct solution formula, so it is constructive.

The rest of this paper is organized as follows. In Section 2, we give some notations, concepts, and lemmas. In Sections 3 and 4, we give convergence results of P-type and D-type ILC for system (3). Examples are given in Section 5 to demonstrate the applicability of our main results.

2 Preliminaries

Let \(J\subset \mathbb {R}\) be a finite interval and \(L(\mathbb {R}^{n})\) be the space of bounded linear operators on \(\mathbb {R}^{n}\). Denote by \(\mathcal{C}(J, \mathbb {R}^{n})\) the Banach space of vector-valued continuous functions from J to \(\mathbb {R}^{n}\) endowed with the ∞-norm \(\Vert x\Vert =\max_{t\in J}\vert x(t)\vert \) for a norm \(\vert \cdot \vert \) on \(\mathbb {R}^{n}\). We also consider on \(\mathcal{C}(J, \mathbb {R}^{n})\) the λ-norm \(\Vert x\Vert _{\lambda}=\sup_{t\in J} \{ e^{-\lambda t}\vert x(t)\vert \}\), \(\lambda>0\). We introduce the space \(\mathcal{C}^{1}(J, \mathbb {R}^{n})=\{x\in\mathcal{C}(J, \mathbb {R}^{n}): \dot{x}\in\mathcal{C}(J, \mathbb {R}^{n}) \}\). For a matrix \(A : \mathbb {R}^{n}\to \mathbb {R}^{n}\), we consider its matrix norm \(\Vert A\Vert =\max_{\vert x\vert =1}\vert Ax\vert \) generated by \(\vert \cdot \vert \).
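On a uniform grid, the ∞-norm and the λ-norm can be approximated as follows (a minimal sketch; replacing the suprema by maxima over grid points is our simplification):

```python
import numpy as np

def sup_norm(x_vals):
    """||x|| = max_t |x(t)|; x_vals has shape (M, n), one row per time sample."""
    return np.max(np.linalg.norm(x_vals, axis=1))

def lambda_norm(t_grid, x_vals, lam):
    """||x||_lambda = sup_t e^{-lam t} |x(t)| on the grid t_grid."""
    return np.max(np.exp(-lam * t_grid) * np.linalg.norm(x_vals, axis=1))
```

In particular, \(\Vert x\Vert _{\lambda}\leq\Vert x\Vert \leq e^{\lambda T}\Vert x\Vert _{\lambda}\) on \([0,T]\), which is the equivalence of norms used repeatedly below.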

Lemma 2.1

see [28], 2.2.8, Chapter 2

Let \(A\in L(\mathbb {R}^{n})\). For a given \(\epsilon>0\), there is a norm \(\vert \cdot \vert \) on \(\mathbb {R}^{n}\) such that

$$\Vert A\Vert \leq\rho(A)+\epsilon, $$

where \(\rho(A)\) denotes the spectral radius of the matrix A.

Next, we give an alternative formula to compute the solution of a linear system with pure delay, which is a direct corollary of [20], Corollary 2.2.

Lemma 2.2

Let \(f:J\rightarrow \mathbb {R}^{n}\) be a continuous function. The solution \(x\in \mathcal{C}^{1}(J,\mathbb {R}^{n})\) of

$$ \textstyle\begin{cases} \dot{x}(t)=Ax(t)+Bx(t-\tau)+f(t), \quad t>0, \tau\geq0, \\ {x}(t)=\varphi(t), \quad-\tau\leq t\leq0, \end{cases} $$
(7)

has the form

$$\begin{aligned} x(t) =&e^{A(t+\tau)}e_{\tau}^{B_{1}t}\varphi(- \tau) + \int_{-\tau}^{0}e^{A(t-\tau-s)}\sum _{l=0}^{j-1}B_{1}^{l} \frac {(t-l\tau-s)^{l}}{l!}e^{A\tau} \bigl[\varphi'(s)-A\varphi(s) \bigr]\,ds \\ &{}+ \int_{-\tau}^{t-j\tau}e^{A(t-\tau-s)}B_{1}^{j} \frac{(t-j\tau -s)^{j}}{j!}e^{A\tau} \bigl[\varphi'(s)-A\varphi(s) \bigr]\,ds \\ &{}+\sum_{i=0}^{j-1} \int_{0}^{t-i\tau}e^{A(t-\tau -s)}B_{1}^{i} \frac{(t-i\tau-s)^{i}}{i!}e^{A\tau}f(s)\,ds, \end{aligned}$$
(8)

where A and B are commutative, \(B_{1}=e^{-A\tau}B\) and \((j-1)\tau \leq t< j\tau\), \(j=1,2,\ldots,N\).

Proof

According to formula (21) in [20], we know that the solution of system (7) has the form

$$\begin{aligned} x(t) =&e^{A(t+\tau)}e_{\tau}^{B_{1}t}\varphi(- \tau) + \int_{-\tau}^{0}e^{A(t-\tau-s)}e_{\tau}^{B_{1}(t-\tau-s)}e^{A\tau } \bigl[\varphi'(s)-A\varphi(s) \bigr]\,ds \\ &{}+ \int_{0}^{t}e^{A(t-\tau-s)}e_{\tau}^{B_{1}(t-\tau-s)}e^{A\tau}f(s)\,ds. \end{aligned}$$
(9)

Without loss of generality, we consider \((j-1)\tau\leq t< j\tau\), \(j=1, 2,\ldots, N\).

Next, we substitute the formula (2) of the delayed matrix exponential into (9) to prove the result. We divide our proof into two steps.

Step 1. We prove that

$$\begin{aligned}& \int_{-\tau}^{0}e^{A(t-\tau-s)}e_{\tau}^{B_{1}(t-\tau-s)}e^{A\tau } \bigl[\varphi'(s)-A\varphi(s) \bigr]\,ds \\ & \quad = \int_{-\tau}^{0}e^{A(t-\tau-s)}\sum _{l=0}^{j-1}B_{1}^{l} \frac{(t-l\tau-s)^{l}}{l!}e^{A\tau} \bigl[\varphi'(s)-A\varphi(s) \bigr]\,ds \\ & \quad\quad{} + \int_{-\tau}^{t-j\tau}e^{A(t-\tau-s)}B_{1}^{j} \frac{(t-j\tau -s)^{j}}{j!}e^{A\tau} \bigl[\varphi'(s)-A\varphi(s) \bigr]\,ds. \end{aligned}$$
(10)

Due to the fact that \(-\tau< s<0\), we obtain \(t-\tau< t-\tau-s< t\) and \((j-2)\tau< t-\tau< t-\tau-s< t< j\tau\). When \(-\tau< s< t-j\tau\), we have \((j-1)\tau< t-\tau-s< t< j\tau\). When \(t-j\tau< s<0\), we have \((j-2)\tau< t-\tau< t-\tau-s<(j-1)\tau\). Hence

$$\begin{aligned}& \int_{-\tau}^{0}e^{A(t-\tau-s)}e_{\tau}^{B_{1}(t-\tau-s)}e^{A\tau } \bigl[\varphi'(s)-A\varphi(s) \bigr]\,ds \\ & \quad = \int_{-\tau}^{t-j\tau}e^{A(t-\tau-s)}\sum _{l=0}^{j}B_{1}^{l} \frac{(t-l\tau-s)^{l}}{l!}e^{A\tau} \bigl[\varphi'(s)-A\varphi(s) \bigr]\,ds \\ & \quad\quad{} + \int_{t-j\tau}^{0}e^{A(t-\tau-s)}\sum _{l=0}^{j-1}B_{1}^{l} \frac{(t-l\tau-s)^{l}}{l!}e^{A\tau} \bigl[\varphi'(s)-A\varphi(s) \bigr]\,ds \\ & \quad = \int_{-\tau}^{0}e^{A(t-\tau-s)}\sum _{l=0}^{j-1}B_{1}^{l} \frac{(t-l\tau-s)^{l}}{l!}e^{A\tau} \bigl[\varphi'(s)-A\varphi(s) \bigr]\,ds \\ & \quad\quad{} + \int_{-\tau}^{t-j\tau}e^{A(t-\tau-s)}B_{1}^{j} \frac{(t-j\tau -s)^{j}}{j!}e^{A\tau} \bigl[\varphi'(s)-A\varphi(s) \bigr]\,ds. \end{aligned}$$

Step 2. We check that

$$ \int_{0}^{t}e^{A(t-\tau-s)}e_{\tau}^{B_{1}(t-\tau-s)}e^{A\tau}f(s)\,ds =\sum_{i=0}^{j-1} \int_{0}^{t-i\tau}e^{A(t-\tau -s)}B_{1}^{i} \frac{(t-i\tau-s)^{i}}{i!}e^{A\tau}f(s)\,ds. $$
(11)

Due to the fact that \(0< s< t\), we obtain \(-\tau< t-\tau-s< t-\tau\). When \(t-(i+1)\tau< s< t-i\tau\), we have \((i-1)\tau< t-\tau-s< i\tau\), \(i=0,1,\ldots, j-2\). When \(0< s< t-(j-1)\tau\), we have \((j-2)\tau< t-\tau-s< t-\tau <(j-1)\tau\). Hence

$$\begin{aligned}& \int_{0}^{t}e^{A(t-\tau-s)}e_{\tau}^{B_{1}(t-\tau-s)}e^{A\tau }f(s)\,ds \\& \quad = \sum_{i=0}^{j-2} \int_{t-(i+1)\tau}^{t-i\tau}e^{A(t-\tau -s)}e_{\tau}^{B_{1}(t-\tau-s)}e^{A\tau}f(s)\,ds + \int_{0}^{t-(j-1)\tau}e^{A(t-\tau-s)}e_{\tau}^{B_{1}(t-\tau -s)}e^{A\tau}f(s)\,ds \\& \quad = \sum_{i=0}^{j-2} \int_{t-(i+1)\tau}^{t-i\tau}e^{A(t-\tau -s)}\sum _{l=0}^{i}B_{1}^{l} \frac{(t-l\tau-s)^{l}}{l!}e^{A\tau }f(s)\,ds \\& \quad\quad{} + \int_{0}^{t-(j-1)\tau}e^{A(t-\tau-s)}\sum _{l=0}^{j-1}B_{1}^{l} \frac{(t-l\tau-s)^{l}}{l!}e^{A\tau}f(s)\,ds \\& \quad = \sum_{i=0}^{j-2}\sum _{l=0}^{i} \int_{t-(i+1)\tau }^{t-i\tau}e^{A(t-\tau-s)}B_{1}^{l} \frac{(t-l\tau-s)^{l}}{l!}e^{A\tau }f(s)\,ds \\& \quad\quad{} +\sum_{l=0}^{j-1} \int_{0}^{t-(j-1)\tau}e^{A(t-\tau -s)}B_{1}^{l} \frac{(t-l\tau-s)^{l}}{l!}e^{A\tau}f(s)\,ds \\& \quad = \sum_{l=0}^{j-2} \int_{t-(j-1)\tau}^{t-l\tau}e^{A(t-\tau -s)}B_{1}^{l} \frac{(t-l\tau-s)^{l}}{l!}e^{A\tau}f(s)\,ds \\& \quad\quad{} +\sum_{l=0}^{j-1} \int_{0}^{t-(j-1)\tau}e^{A(t-\tau -s)}B_{1}^{l} \frac{(t-l\tau-s)^{l}}{l!}e^{A\tau}f(s)\,ds \\& \quad = \sum_{i=0}^{j-2} \int_{0}^{t-i\tau}e^{A(t-\tau -s)}B_{1}^{i} \frac{(t-i\tau-s)^{i}}{i!}e^{A\tau}f(s)\,ds \\& \quad\quad{} + \int_{0}^{t-(j-1)\tau}e^{A(t-\tau-s)}B_{1}^{j-1} \frac {(t-(j-1)\tau-s)^{j-1}}{(j-1)!}e^{A\tau}f(s)\,ds \\& \quad = \sum_{i=0}^{j-1} \int_{0}^{t-i\tau}e^{A(t-\tau -s)}B_{1}^{i} \frac{(t-i\tau-s)^{i}}{i!}e^{A\tau}f(s)\,ds. \end{aligned}$$

Linking (9), (10), and (11), one can get the result (8). The proof is finished. □

3 Convergence analysis of P-type

In this section, we give the first convergence result of P-type.

Theorem 3.1

Let \(y_{d}(t)\), \(t\in[0,T]\) be a desired trajectory for system (3). If \(\rho(E-DP_{o})<1\), then the P-type ILC law (5) guarantees \(\lim_{k\rightarrow\infty }y_{k}(t)=y_{d}(t)\) uniformly on \([0,T]\).

Proof

Without loss of generality, we consider \((j-1)\tau\leq t< j\tau\), \(j=1,2,\ldots, N\). Linking (3) and (4), we have

$$\begin{aligned} e_{k+1}(t)-e_{k}(t) =&y_{k}(t)-y_{k+1}(t) \\ =&-C \int_{0}^{t}e^{A(t-\tau-s)}e_{\tau}^{B_{1}(t-\tau-s)}e^{A\tau } \delta u_{k}(s)\,ds -D\delta u_{k}(t). \end{aligned}$$

According to (5) and Lemma 2.2, we have

$$ e_{k+1}(t) =(E-DP_{o})e_{k}(t) -C \sum_{i=0}^{j-1} \int_{0}^{t-i\tau}e^{A(t-\tau -s)}B_{1}^{i} \frac{(t-i\tau-s)^{i}}{i!}e^{A\tau}\delta u_{k}(s)\,ds. $$
(12)

Further, by Lemma 2.1 we know that, for a given \(\epsilon>0\), there is a norm \(\vert \cdot \vert \) on \(\mathbb {R}^{n}\) such that

$$\Vert E-DP_{o}\Vert \leq\rho(E-DP_{o})+\epsilon. $$

So by (12), we have

$$\begin{aligned} \bigl\vert e_{k+1}(t) \bigr\vert \leq& \bigl[\rho(E-DP_{o})+ \epsilon\bigr] \bigl\vert e_{k}(t) \bigr\vert \\ &{}+\Vert C\Vert \sum_{i=0}^{j-1} \int_{0}^{t-i\tau} \bigl\Vert e^{A(t-\tau-s)} \bigr\Vert \bigl\Vert B_{1}^{i} \bigr\Vert \frac{(t-i\tau-s)^{i}}{i!} \bigl\Vert e^{A\tau } \bigr\Vert \bigl\vert \delta u_{k}(s) \bigr\vert \,ds. \end{aligned}$$

Hence, we obtain

$$\begin{aligned} \bigl\vert e_{k+1}(t) \bigr\vert \leq& \bigl[ \rho(E-DP_{o})+\epsilon\bigr] \bigl\vert e_{k}(t) \bigr\vert \\ &{}+\Vert C\Vert \sum_{i=0}^{j-1} \int_{0}^{t-i\tau}e^{\Vert A\Vert (t-s)}\Vert B_{1}\Vert ^{i}\frac{(t-i\tau-s)^{i}}{i!} \bigl\vert \delta u_{k}(s) \bigr\vert \,ds. \end{aligned}$$
(13)

For fixed i, \(i=0,1,\ldots,j-1\), we have

$$\begin{aligned}& \int_{0}^{t-i\tau}e^{\Vert A\Vert (t-s)}\Vert B_{1}\Vert ^{i}\frac{(t-i\tau-s)^{i}}{i!} \bigl\vert \delta u_{k}(s) \bigr\vert \,ds \\& \quad =\frac{\Vert B_{1}\Vert ^{i}}{i!} \int_{0}^{t-i\tau }e^{\Vert A\Vert (t-s)}(t-i \tau-s)^{i} \bigl\vert \delta u_{k}(s) \bigr\vert \,ds \\& \quad \leq\frac{\Vert B_{1}\Vert ^{i}}{i!}e^{\Vert A\Vert t} \int_{0}^{t-i\tau}e^{(\lambda-\Vert A\Vert )s}(t-i \tau-s)^{i}\,ds \Vert \delta u_{k}\Vert _{\lambda}. \end{aligned}$$
(14)

For any \(\lambda>\Vert A\Vert \), we apply integration by parts via mathematical induction to derive

$$\begin{aligned}& \int_{0}^{t-i\tau}e^{(\lambda-\Vert A\Vert )s}(t-i \tau-s)^{i}\,ds \\& \quad =\frac{1}{\lambda-\Vert A\Vert } \int_{0}^{t-i\tau}(t-i\tau-s)^{i}\,de^{(\lambda-\Vert A\Vert )s} \\& \quad = -\frac{1}{\lambda-\Vert A\Vert }(t-i\tau)^{i}+\frac {i}{(\lambda-\Vert A\Vert )^{2}} \int_{0}^{t-i\tau }(t-i\tau-s)^{i-1}\,de^{(\lambda-\Vert A\Vert )s} \\& \quad = -\frac{1}{\lambda-\Vert A\Vert }(t-i\tau)^{i}-\frac {i}{(\lambda-\Vert A\Vert )^{2}}(t-i \tau)^{i-1} \\& \quad\quad{} +\frac{i(i-1)}{(\lambda-\Vert A\Vert )^{3}} \int_{0}^{t-i\tau}(t-i\tau-s)^{i-2}\,de^{(\lambda-\Vert A\Vert )s} \\& \quad\quad{} \vdots \\& \quad = -\frac{1}{\lambda-\Vert A\Vert }(t-i\tau)^{i}-\frac {i}{(\lambda-\Vert A\Vert )^{2}}(t-i \tau)^{i-1}-\cdots \\& \quad\quad{} -\frac{i!}{(\lambda-\Vert A\Vert )^{i}}(t-i\tau)+\frac {i!}{(\lambda-\Vert A\Vert )^{i+1}} \bigl(e^{(\lambda -\Vert A\Vert )(t-i\tau)}-1 \bigr) \\& \quad = -\sum_{p=0}^{i}\frac{i!(t-i\tau)^{i-p}}{(i-p)!(\lambda -\Vert A\Vert )^{p+1}} + \frac{i!e^{(\lambda-\Vert A\Vert )(t-i\tau)}}{(\lambda -\Vert A\Vert )^{i+1}}. \end{aligned}$$
(15)

Linking (13), (14), and (15), it is not difficult to get

$$\begin{aligned} \bigl\vert e_{k+1}(t) \bigr\vert \leq& \bigl[ \rho(E-DP_{o})+\epsilon\bigr] \bigl\vert e_{k}(t) \bigr\vert +\Vert C\Vert \sum_{i=0}^{j-1} \frac{\Vert B_{1}\Vert ^{i}}{i!}e^{\Vert A\Vert t} \\ &{}\times\Biggl[-\sum_{p=0}^{i} \frac{i!(t-i\tau )^{i-p}}{(i-p)!(\lambda-\Vert A\Vert )^{p+1}} +\frac{i!e^{(\lambda -\Vert A\Vert )(t-i\tau)}}{(\lambda -\Vert A\Vert )^{i+1}} \Biggr]\Vert \delta u_{k} \Vert _{\lambda}. \end{aligned}$$
(16)

By (16) via (5), we have

$$\begin{aligned} \bigl\vert e_{k+1}(t) \bigr\vert e^{-\lambda t} \leq& \bigl[ \rho(E-DP_{o})+\epsilon\bigr] \bigl\vert e_{k}(t) \bigr\vert e^{-\lambda t}+\Vert C\Vert \Vert P_{o}\Vert \sum _{i=0}^{j-1}\frac{\Vert B_{1}\Vert ^{i}}{i!}e^{(\Vert A\Vert -\lambda)t} \\ &{}\times\Biggl[-\sum_{p=0}^{i} \frac{i!(t-i\tau )^{i-p}}{(i-p)!(\lambda-\Vert A\Vert )^{p+1}} +\frac{i!e^{(\lambda -\Vert A\Vert )(t-i\tau)}}{(\lambda -\Vert A\Vert )^{i+1}} \Biggr]\Vert e_{k}\Vert _{\lambda} \end{aligned}$$

and, taking the λ-norm, we arrive at

$$\begin{aligned} \Vert e_{k+1}\Vert _{\lambda} \leq& \bigl[ \rho(E-DP_{o})+\epsilon\bigr]\Vert e_{k}\Vert _{\lambda}+\Vert C\Vert \Vert P_{o}\Vert \sum _{i=0}^{j-1}\frac{\Vert B_{1}\Vert ^{i}}{i!} \\ &{}\times\Biggl[\sum_{p=0}^{i} \frac{i!\max\{1,T^{i-p}\} }{(i-p)!(\lambda-\Vert A\Vert )^{p+1}} +\frac{i!}{(\lambda -\Vert A\Vert )^{i+1}} \Biggr]\Vert e_{k}\Vert _{\lambda}. \end{aligned}$$

Since \(\rho(E-DP_{o})<1\), for any \(\epsilon\in(0,\frac{1-\rho (E-DP_{o})}{4} )\) and \(\lambda>\Vert A\Vert \) sufficiently large, we have

$$\Vert e_{k+1}\Vert _{\lambda}\leq\bigl(\rho (E-DP_{o})+2\epsilon\bigr)\Vert e_{k}\Vert _{\lambda}, $$

which implies \(\lim_{k\rightarrow\infty} \Vert e_{k}\Vert _{\lambda}=0\), since \(\rho(E-DP_{o})+2\epsilon<1\). In addition, \(\Vert e_{k}\Vert \leq e^{\lambda T}\Vert e_{k}\Vert _{\lambda}\). Hence \(\lim_{k\rightarrow\infty} \Vert e_{k}\Vert =0\). The proof is completed. □

Remark 3.2

We use a delayed matrix exponential method to obtain convergence of the P-type ILC algorithm. The norm \(\Vert \cdot \Vert _{\lambda}\) is applied just for technical reasons, in order to obtain uniform convergence under the sole condition \(\rho(E-DP_{o})<1\) at the end of the above proof. Moreover, fixing \(\epsilon\in(0,\frac {1-\rho(E-DP_{o})}{4} )\), the smallest suitable \(\lambda>\Vert A\Vert \) is given by the equation

$$ \Vert C\Vert \Vert P_{o}\Vert \sum _{i=0}^{N-1}\frac{\Vert B_{1}\Vert ^{i}}{i!} \Biggl[\sum _{p=0}^{i}\frac{i!\max\{1,T^{i-p}\}}{(i-p)!(\lambda-\Vert A\Vert )^{p+1}} +\frac{i!}{(\lambda-\Vert A\Vert )^{i+1}} \Biggr]=\epsilon, $$
(17)

which is rather awkward to solve analytically. On the other hand, since its left-hand side decreases monotonically from +∞ to 0 as λ increases from \(\Vert A\Vert \) to ∞, (17) has a unique solution for any \(\epsilon>0\). Moreover, it may have any \(\lambda>\Vert A\Vert \) as a solution by varying \(\Vert C\Vert \Vert P_{o}\Vert \).
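Numerically, the root of (17) is straightforward to locate by bisection, given the monotonicity just mentioned. A minimal sketch (function names, bracketing strategy, and tolerances are ours):

```python
from math import factorial

def lhs_17(lam, normC, normPo, normB1, normA, T, N):
    """Left-hand side of (17) as a function of lam > ||A||."""
    total = 0.0
    for i in range(N):                          # i = 0, ..., N-1
        bracket = sum(factorial(i) * max(1.0, T ** (i - p))
                      / (factorial(i - p) * (lam - normA) ** (p + 1))
                      for p in range(i + 1))
        bracket += factorial(i) / (lam - normA) ** (i + 1)
        total += normB1 ** i / factorial(i) * bracket
    return normC * normPo * total

def smallest_lambda(eps, *args):
    """Bisection for the unique lam > ||A|| solving lhs_17(lam, *args) = eps."""
    normA = args[3]
    lo, hi = normA + 1e-9, normA + 1.0
    while lhs_17(hi, *args) > eps:              # enlarge hi until the root is bracketed
        hi = normA + 2.0 * (hi - normA)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if lhs_17(mid, *args) > eps else (lo, mid)
    return hi
```

For instance, for the data of Example 5.1 below one would call smallest_lambda(eps, 1.0, 1.0, 0.6065, 1.0, 1.0, 2).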

4 Convergence analysis of D-type

In this section, we discuss the ILC convergence of D-type.

Theorem 4.1

Let \(y_{d}(t)\), \(t\in[0, T]\), be a desired trajectory for system (3) with \(D=\Theta\). If \(\rho(E-CD_{o})<1\) and \(e_{k}(0)=0\), \(k=1,2,\ldots\) , then the D-type ILC law (6) guarantees \(\lim_{k\rightarrow\infty}y_{k}(t)=y_{d}(t)\) uniformly on \([0,T]\).

Proof

First, we consider \((j-1)\tau\leq t< j\tau\), \(j=2,3,\ldots, N\). By (4) and (6), we have

$$\begin{aligned} \dot{e}_{k+1}(t)-\dot{e}_{k}(t) =&C \bigl[ \dot{x}_{k}(t)-\dot{x}_{k+1}(t) \bigr] \\ =&CA \bigl[x_{k}(t)-x_{k+1}(t) \bigr]+CB \bigl[x_{k}(t- \tau)-x_{k+1}(t-\tau) \bigr]+C \bigl[u_{k}(t)-u_{k+1}(t) \bigr] \\ =&-CA \int_{0}^{t}e^{A(t-\tau-s)}e_{\tau}^{B_{1}(t-\tau-s)}e^{A\tau } \delta u_{k}(s)\,ds \\ &{}-CB \int_{0}^{t-\tau}e^{A(t-2\tau-s)}e_{\tau}^{B_{1}(t-2\tau -s)}e^{A\tau} \delta u_{k}(s)\,ds-CD_{o}\dot{e}_{k}(t). \end{aligned}$$

So we have

$$\begin{aligned} \dot{e}_{k+1}(t) =&(E-CD_{o})\dot{e}_{k}(t) \\ &{}-CA \int_{0}^{t}e^{A(t-\tau -s)}e_{\tau}^{B_{1}(t-\tau-s)}e^{A\tau} \delta u_{k}(s)\,ds \\ &{}-CB \int_{0}^{t-\tau}e^{A(t-2\tau-s)}e_{\tau}^{B_{1}(t-2\tau -s)}e^{A\tau} \delta u_{k}(s)\,ds. \end{aligned}$$

Similar to the proof of Theorem 3.1, one can apply Lemma 2.2 to derive that

$$\begin{aligned} \dot{e}_{k+1}(t) =&(E-CD_{o})\dot{e}_{k}(t) \\ &{}-CA\sum_{i=1}^{j} \int_{0}^{t-(i-1)\tau}e^{A(t-\tau-s)}B_{1}^{i-1} \frac {(t-(i-1)\tau-s)^{i-1}}{(i-1)!}e^{A\tau}\delta u_{k}(s)\,ds \\ &{}-CB\sum_{l=2}^{j} \int_{0}^{t-(l-1)\tau}e^{A(t-2\tau -s)}B_{1}^{l-2} \frac{(t-(l-1)\tau-s)^{l-2}}{(l-2)!}e^{A\tau}\delta u_{k}(s)\,ds. \end{aligned}$$

Obviously, we have

$$\begin{aligned}& \bigl\vert \dot{e}_{k+1}(t) \bigr\vert e^{-\lambda t} \\& \quad \leq \bigl[\rho(E-CD_{o})+\epsilon\bigr] \bigl\vert \dot {e}_{k}(t) \bigr\vert e^{-\lambda t} \\& \quad\quad{} +\Vert CA\Vert \sum_{i=1}^{j} \frac{\Vert B_{1}\Vert ^{i-1}}{(i-1)!}e^{(\Vert A\Vert -\lambda )t} \int_{0}^{t-(i-1)\tau}e^{(\lambda-\Vert A\Vert )s} \bigl(t-(i-1)\tau-s \bigr)^{i-1}\,ds\Vert \delta u_{k}\Vert _{\lambda} \\& \quad\quad{} +\Vert CB\Vert \sum_{l=2}^{j} \frac{\Vert B_{1}\Vert ^{l-2}}{(l-2)!}e^{(\Vert A\Vert -\lambda )t} \int_{0}^{t-(l-1)\tau}e^{(\lambda-\Vert A\Vert )s} \bigl(t-(l-1)\tau-s \bigr)^{l-2}\,ds \Vert \delta u_{k}\Vert _{\lambda}. \end{aligned}$$
(18)

In analogy to the computation in (14)-(16), inequality (18) becomes

$$ \bigl\vert \dot{e}_{k+1}(t) \bigr\vert e^{-\lambda t} \leq\bigl[\rho(E-CD_{o})+\epsilon\bigr] \bigl\vert \dot{e}_{k}(t) \bigr\vert e^{-\lambda t}+ (W_{1}+W_{2} )\Vert D_{o}\Vert \Vert \dot{e}_{k}\Vert _{\lambda}, $$
(19)

where

$$W_{1}=\Vert CA\Vert \sum_{i=1}^{j} \frac{\Vert B_{1}\Vert ^{i-1}}{(i-1)!} \Biggl[\sum_{p=1}^{i} \frac{(i-1)!T^{i-p}}{(i-p)!(\lambda-\Vert A\Vert )^{p}} +\frac {(i-1)!}{(\lambda-\Vert A\Vert )^{i}} \Biggr] $$

and

$$W_{2}=\Vert CB\Vert \sum_{l=2}^{j} \frac{\Vert B_{1}\Vert ^{l-2}}{(l-2)!} \Biggl[\sum_{q=2}^{l} \frac{(l-2)!T^{l-q}}{(l-q)!(\lambda-\Vert A\Vert )^{q-1}} +\frac {(l-2)!}{(\lambda-\Vert A\Vert )^{l-1}} \Biggr]. $$

If \(j=1\), which means \(0\leq t<\tau\), then by (4) and (6), we have

$$\begin{aligned} \dot{e}_{k+1}(t)-\dot{e}_{k}(t) =&C \bigl[ \dot{x}_{k}(t)-\dot{x}_{k+1}(t) \bigr] \\ =&CA \bigl[x_{k}(t)-x_{k+1}(t) \bigr]+CB \bigl[x_{k}(t- \tau)-x_{k+1}(t-\tau) \bigr]+C \bigl[u_{k}(t)-u_{k+1}(t) \bigr] \\ =&-CA \int_{0}^{t}e^{A(t-\tau-s)}e_{\tau}^{B_{1}(t-\tau-s)}e^{A\tau } \delta u_{k}(s)\,ds-CD_{o}\dot{e}_{k}(t). \end{aligned}$$

We can repeat the above arguments to arrive at (19) with \(W_{2}=0\) and \(W_{1}=\frac{2\Vert CA\Vert }{\lambda-\Vert A\Vert }\). Hence (19) holds on the whole \([0,T]\), which implies

$$\Vert \dot{e}_{k+1}\Vert _{\lambda}\leq\bigl[\rho (E-CD_{o})+\epsilon+ (W_{1}+W_{2} )\Vert D_{o}\Vert \bigr]\Vert \dot{e}_{k}\Vert _{\lambda}. $$

Note that, since \(\rho(E-CD_{o})<1\), we can choose ϵ small enough and \(\lambda>\Vert A\Vert \) large enough that \(\rho(E-CD_{o})+\epsilon+ (W_{1}+W_{2} )\Vert D_{o}\Vert <1\), and therefore \(\lim_{k\rightarrow \infty} \Vert \dot{e}_{k}\Vert _{\lambda}=0\). Thus, \(\Vert \dot{e}_{k}\Vert \leq e^{\lambda T}\Vert \dot{e}_{k}\Vert _{\lambda}\), so \(\Vert \dot {e}_{k}\Vert \rightarrow0\) as \(k\rightarrow\infty\). Due to the fact that \(e_{k}(0)=0\), we get \(\Vert e_{k}\Vert \le T\Vert \dot{e}_{k}\Vert \); consequently, \(\Vert e_{k}\Vert \rightarrow0\) as \(k\rightarrow\infty\). The proof is completed. □

Remark 4.2

Since

$$e_{k}(0)=y_{d}(0)-y_{k}(0)=y_{d}(0)-Cx_{k}(0)=y_{d}(0)-C \varphi(0), $$

we need \(y_{d}(0)=C\varphi(0)\). This gives a compatibility condition for φ. Since \(y_{d}(0)\) is arbitrary, we need C to be surjective.

5 Simulation examples

In this section, four numerical examples are presented to demonstrate the validity of the designed method. In order to measure the tracking errors of the trajectories, we adopt for simplicity the \(L^{2}\)-norm in our simulations. The \(L^{2}\)-norm can be used because \(\Vert e_{k}\Vert \leq e^{\lambda T}\Vert e_{k}\Vert _{\lambda}\) for a suitable \(\lambda>\Vert A\Vert \), so \(\Vert e_{k}\Vert \rightarrow0\) as \(k\rightarrow \infty\), i.e., \(\lim_{k\rightarrow\infty}\sup_{t\in J}\vert e_{k}(t)\vert =0\), which yields \(e_{k}\to0\) in \(L^{2}(J,\mathbb {R}^{n})\).
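In the sketches below, the tracking error in each iteration is measured by the discrete \(L^{2}\) norm on a uniform grid with step h (our own discretization, not taken from the paper):

```python
import numpy as np

def l2_error(e_vals, h):
    """Discrete L2([0, T]) norm of a sampled scalar tracking error with step h."""
    return np.sqrt(h * np.sum(np.asarray(e_vals) ** 2))
```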

Example 5.1

Consider

$$ \textstyle\begin{cases} \dot{x}_{k}(t)=x_{k}(t)+x_{k}(t-0.5)+u_{k}(t),\quad x_{k}(t)\in \mathbb {R}, t\geq0, \\ x_{k}(t)=t, \quad-0.5\leq t\leq0, \\ y_{k}(t)=x_{k}(t)+0.3u_{k}(t), \end{cases} $$
(20)

and P-type ILC

$$u_{k+1}(t)=u_{k}(t)+e_{k}(t). $$

The reference trajectory is

$$y_{d}(t)= \textstyle\begin{cases} 2 \sin(4\pi t), &t\in[0,0.4], \\ 2\sin(4\pi t)+5\cos(4\pi t)+10, &t\in[0.4,1]. \end{cases} $$

Set \(t\in[0,1]\), \(\tau=0.5\), \(\varphi(t)=t\). Here \(n=m=1\), \(A=1\), \(B=1\), \(C=1\), and \(D=0.3\). It is not difficult to find that \(B_{1}=e^{-A\tau}B=e^{-0.5}\) and

$$e_{0.5}^{B_{1}t}= \textstyle\begin{cases} 1+e^{-0.5}t, &t\in[0, 0.5], \\ 1+e^{-0.5}t+e^{-1}\frac{(t-0.5)^{2}}{2!}, & t\in[0.5, 1]. \end{cases} $$

Next, we set \(P_{o}=1\). Obviously, \(\rho(1-DP_{o})=0.7<1\). Thus, all conditions of Theorem 3.1 are satisfied, so \(y_{k}(t)\) converges uniformly to \(y_{d}(t)\) for \(t\in[0,1]\).
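The following self-contained sketch reproduces the setting of Example 5.1 with a simple forward-Euler discretization and zero initial input \(u_{0}\equiv0\); both choices are ours, and this is not the code behind Figure 1:

```python
import numpy as np

h, T, tau = 0.001, 1.0, 0.5
t = np.arange(0.0, T + h/2, h)                 # grid on [0, 1]
d = int(round(tau / h))                        # delay measured in steps
A, B, C, D, Po = 1.0, 1.0, 1.0, 0.3, 1.0

yd = np.where(t <= 0.4, 2*np.sin(4*np.pi*t),
              2*np.sin(4*np.pi*t) + 5*np.cos(4*np.pi*t) + 10)

def simulate(u):
    """Forward-Euler simulation of (20) with history phi(t) = t on [-0.5, 0]."""
    x = np.empty(len(t) + d)
    x[:d+1] = np.arange(-tau, h/2, h)          # phi(t) = t, so history values equal times
    for i in range(len(t) - 1):
        x[d+i+1] = x[d+i] + h * (A*x[d+i] + B*x[i] + u[i])
    return C*x[d:] + D*u                       # output y_k on the grid

u = np.zeros_like(t)                           # start from u_0 = 0
for k in range(10):                            # ten P-type learning iterations
    e = yd - simulate(u)
    print(k, np.sqrt(h * np.sum(e**2)))        # discrete L2 tracking error
    u = u + Po * e                             # P-type law (5) with P_o = 1
```

By Theorem 3.1 the printed errors tend to zero as the number of iterations grows.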

The upper part of Figure 1 shows the output \(y_{k}\) of equation (20) at the 10th iteration and the reference trajectory \(y_{d}\). The lower part of Figure 1 shows the \(L^{2}\)-norm of the tracking error (see also Table 1) at each iteration.

Figure 1

The system output and the tracking error for (20).

Table 1 The tracking error of each iteration for (20)

Example 5.2

Consider

$$ \textstyle\begin{cases} \dot{x}_{k}(t)=x_{k}(t)+x_{k}(t-0.5)+u_{k}(t),\quad x_{k}(t)\in \mathbb {R}, t\geq0, \\ x_{k}(t)=t,\quad -0.5\leq t\leq0, \\ y_{k}(t)=x_{k}(t), \end{cases} $$
(21)

and D-type ILC

$$u_{k+1}(t)=u_{k}(t)+0.3\dot{e}_{k}(t). $$

The reference trajectory is the same as in Example 5.1. Set \(t\in[0,1]\), \(\tau=0.5\), \(\varphi(t)=t\). Here \(n=m=1\), \(A=1\), \(B=1\), \(C=1\), and \(D=0\). It is not difficult to find that \(B_{1}\) and \(e_{0.5}^{B_{1}t}\) are the same as in Example 5.1. Next, we set \(D_{o}=0.3\). Obviously, \(\rho(1-CD_{o})=0.7<1\) and \(e_{k}(0)=y_{d}(0)-C\varphi(0)=0\). Thus, all conditions of Theorem 4.1 are satisfied.
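Reusing the grid, reference, and simulate() from the sketch after Example 5.1 (so this fragment is not standalone), a D-type run of (21) can be sketched as follows; setting D to 0 and estimating \(\dot{e}_{k}\) by finite differences are our own choices:

```python
D, Do = 0.0, 0.3                     # output y_k = x_k and gain D_o = 0.3
u = np.zeros_like(t)
for k in range(20):                  # twenty D-type learning iterations
    e = yd - simulate(u)
    print(k, np.sqrt(h * np.sum(e**2)))
    u = u + Do * np.gradient(e, h)   # D-type law (6) with a numerical derivative
```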

The upper part of Figure 2 shows the output \(y_{k}\) of equation (21) at the 20th iteration and the reference trajectory \(y_{d}\). The lower part of Figure 2 shows the \(L^{2}\)-norm of the tracking error (see also Table 2) at each iteration.

Figure 2

The system output and the tracking error for (21).

Table 2 The tracking error of each iteration for (21)

Example 5.3

Consider

$$ \textstyle\begin{cases} \dot{x}_{k}(t)=x_{k}(t)+x_{k}(t-0.5)+u_{k}(t),\quad x_{k}(t),u_{k}(t)\in \mathbb {R}^{2}, t\geq0, \\ x_{k}(t)= (e^{t}, e^{t} )^{T},\quad -0.5\leq t\leq0, \\ y_{k}(t)=(1, 2)x_{k}(t)+(2, 1)u_{k}(t), \end{cases} $$
(22)

and P-type ILC

$$u_{k+1}(t)=u_{k}(t)+\left ( \textstyle\begin{array}{c} 0.5\\ -0.4 \end{array}\displaystyle \right )e_{k}(t). $$

The reference trajectory is

$$y_{d}(t)= \textstyle\begin{cases} 2 \sin(4\pi t), &t\in[0,0.25], \\ 2\sin(4\pi t)+8\sin(4\pi t)+10, &t\in[0.25,1]. \end{cases} $$

Set \(t\in[0,1]\), \(\tau=0.5\), \(\varphi(t)=(e^{t}, e^{t})^{T}\). Here \(n=2\), \(m=1\), \(A=B=E\), where E is the \(2\times2\) identity matrix, \(C=(1, 2)\), and \(D=(2, 1)\). It is not difficult to find that

$$\begin{aligned} B_{1}=e^{-A\tau}B=\left ( \textstyle\begin{array}{c@{\quad}c} 0.6065 &0\\ 0 &0.6065 \end{array}\displaystyle \right ), \end{aligned}$$

and

$$e_{0.5}^{B_{1}t}= \textstyle\begin{cases} E, &t\in[-0.5, 0], \\ E+B_{1}t,& t\in[0, 0.5], \\ E+B_{1}t+B_{1}^{2}\frac{(t-0.5)^{2}}{2!}, &t\in[0.5, 1]. \end{cases} $$

Next, we set \(P_{o}=(0.5, -0.4)^{T}\). Obviously, \(\rho(E-DP_{o})=\vert 1-0.6\vert =0.4<1\). Thus, all conditions of Theorem 3.1 are satisfied. Then \(y_{k}(t)\) converges uniformly to \(y_{d}(t)\) for \(t\in[0,1]\).
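A quick numerical check of the contraction condition (our own snippet, not part of the paper):

```python
import numpy as np

D  = np.array([[2.0, 1.0]])                    # 1 x 2
Po = np.array([[0.5], [-0.4]])                 # 2 x 1
rho = max(abs(np.linalg.eigvals(np.eye(1) - D @ Po)))
print(rho)                                     # 0.4 < 1, so Theorem 3.1 applies
```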

The upper part of Figure 3 shows the output \(y_{k}\) of equation (22) at the 10th iteration and the reference trajectory \(y_{d}\). The lower part of Figure 3 shows the \(L^{2}\)-norm of the tracking error (see also Table 3) at each iteration.

Figure 3

The system output and the tracking error for (22).

Table 3 The tracking error of each iteration for (22)

Example 5.4

For system

$$\begin{aligned} \textstyle\begin{cases} \dot{x}_{k}(t)=x_{k}(t)+x_{k}(t-0.5)+u_{k}(t),\quad x_{k}(t),u_{k}(t)\in \mathbb {R}^{2}, t\geq0, \\ x_{k}(t)=(t, t)^{T}, \quad-0.5\leq t\leq0, \\ y_{k}(t)=(1, 2)x_{k}(t), \end{cases}\displaystyle \end{aligned}$$
(23)

we take the D-type ILC

$$u_{k+1}(t)=u_{k}(t)+\left ( \textstyle\begin{array}{c} 1\\ -0.4 \end{array}\displaystyle \right )\dot{e}_{k}(t). $$

The reference trajectory is the same as in Example 5.3. Since \(\rho(E-CD_{o})=\vert 1-(1\cdot1+2\cdot(-0.4))\vert =0.8<1\) and \(e_{k}(0)=y_{d}(0)-C\varphi(0)=0\), all conditions of Theorem 4.1 are satisfied. Then \(y_{k}(t)\) converges uniformly to \(y_{d}(t)\) for \(t\in[0,1]\).

The upper part of Figure 4 shows the output \(y_{k}\) of equation (23) at the 20th iteration and the reference trajectory \(y_{d}\). The lower part of Figure 4 shows the \(L^{2}\)-norm of the tracking error (see also Table 4) at each iteration.

Figure 4

The system output and the tracking error for (23).

Table 4 The tracking error of each iteration for (23)