1 Introduction

In 1984, Arimoto et al. [1] proposed basic theories and algorithms for iterative learning control (ILC) and pointed out that ILC is a useful practical control approach for systems that perform tasks repetitively over a finite time interval. The performance can be improved step by step, and the output trajectory can be made to track the desired one, by updating the input signal with the error data. Through three decades of study and development, ILC has achieved significant progress in both theory and applications and has become one of the most active fields in intelligent control. For more details on the contributions for linear and nonlinear ordinary differential equations, the reader is referred to the monographs [2–6] and [7–16].

The issue of designing and analyzing ILC for impulsive differential equations, discontinuous systems [17, 18], and distributed parameter systems or PDEs has not been fully investigated, and only a limited number of results [19–22] are available so far. In [19], Liu et al. explore a P-type iterative learning control law with initial state learning for impulsive ordinary differential equations to track a discontinuous desired output trajectory. In [20], Xu et al. study P-type and D-type ILC for linear first-order distributed parameter systems in the sense of the sup-norm. In [21], Huang and Xu apply a P-type steady-state ILC scheme to the boundary control of PDEs. In [22], Huang et al. construct a uniform design and analysis framework for iterative learning control of linear inhomogeneous distributed parameter systems.

When dealing with impulsive distributed parameter systems, ILC design and convergence analysis become far more challenging, and the existing design and analysis techniques need to be extended. The main objective of this paper is to extend [20] to a study of ILC for impulsive nonlinear first-order distributed parameter systems with initial error in the sense of the λ-norm (the λ-norm was introduced by Arimoto et al. [1]; cf. [23] and [24]) via semigroup theory.

This paper is a continuation of our recent related papers [19, 20]. The main contributions of the paper are summarized as follows.

(i) A uniform design and analysis framework is presented for ILC of a class of impulsive first-order distributed parameter systems in the time domain. In contrast, [19] considers ILC of impulsive ordinary differential equations in finite-dimensional spaces.

(ii) Instead of considering ILC of linear first-order distributed parameter systems without initial error as in [20], we consider a class of impulsive nonlinear first-order distributed parameter systems with initial error and a more general discontinuous output tracking problem.

(iii) Instead of a simplified ILC updating law without initial state learning as in [20], we consider an ILC updating law with initial state learning.

(iv) Instead of choosing the sup-norm as in [20], we use the \(L^{p}\)-norm and λ-norm, respectively.

2 System description and problem statement

Denote \(J:=[0,a]\) and let X, U and Y be three Hilbert spaces. We study ILC of the following impulsive nonlinear first-order distributed parameter systems:

$$ \left \{ \textstyle\begin{array}{@{}l} \dot{x}_{k}(t)=Ax_{k}(t)+f(t,x_{k}(t))+Bu_{k}(t),\quad t\in J\backslash\mathbb {D}, k\in\mathbb{N}, \\ x_{k}(t^{+}_{i})=x_{k}(t^{-}_{i})+g_{i}(x_{k}(t_{i})),\quad i\in \mathbb{M}, \end{array}\displaystyle \right . $$
(1)

and output equation

$$ y_{k}(t)=C x_{k}(t)+D u_{k}(t), \quad t\in J, $$
(2)

where k denotes the iteration index, \(x_{k}\) is the state variable at the kth iteration, \(u_{k}\) is the control input at the kth iteration, \(y_{k}\) is the system output at the kth iteration, \(\mathbb{D}:=\{ t_{1},t_{2},\ldots,t_{m}\}\), \(\mathbb{M}:=\{1,2,\ldots,m\}\), \(x_{k}: [0,a]\to X\), \(u_{k}: [0,a]\to U\), \(y_{k}: [0,a]\to Y\). The linear unbounded operator A is the infinitesimal generator of a \(C_{0}\)-semigroup \(T(t)\), \(t\geq0\) in X, B is a bounded linear operator from U to X, i.e., \(B\in L(U,X)\), C is a bounded linear operator from X to Y, i.e., \(C\in L(X,Y)\), and D is a bounded linear operator from U to Y, i.e., \(D\in L(U,Y)\). The nonlinear terms \(f:J\times X\rightarrow X\) and \(g_{i}:X\to X\) will be specified later. The impulsive time sequence \(\{t_{i}\}_{i\in\mathbb{M}}\) satisfies \(0=t_{0}< t_{1}< t_{2}<\cdots <t_{m}<t_{m+1}=a\). The one-sided limits \(x_{k}(t^{-}_{i}):=\lim_{\varepsilon\rightarrow0^{-}} x_{k}(t_{i}+\varepsilon)\) and \(x_{k}(t^{+}_{i}):=\lim_{\varepsilon\rightarrow0^{+}} x_{k}(t_{i}+\varepsilon)\) denote the left and right limits of \(x_{k}(t)\) at \(t=t_{i}\), respectively.

Denote \(PC(J,X)\) := {\(x : J\rightarrow X\) : x is continuous at \(t\in J\backslash\mathbb{D}\), and x is continuous from the left and has right-hand limits at \(t\in\mathbb{D}\)}, endowed with the λ-norm \(\|x\|_{\lambda}=\sup_{t \in J}e^{-\lambda t}\|x(t)\| _{X}\) for some \(\lambda>0\). Define \(L^{p}(J,X):= \{x: J\rightarrow X \mbox{ is strongly measurable}: \int_{0}^{a}\|x(s)\|_{X}^{p}\,ds<\infty \}\), endowed with the norm \(\|x\|_{L^{p}}= (\int_{0}^{a}\|x(t)\|_{X}^{p}\,dt )^{\frac{1}{p}}\), \(p\in(1,\infty)\). Obviously, \((PC(J,X),\|\cdot\|_{\lambda})\) and \((L^{p}(J,X),\|\cdot\|_{L^{p}})\), \(p\in(1,\infty)\), are Banach spaces.
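For readers who wish to experiment numerically, the λ-norm of a sampled scalar trajectory can be approximated on a time grid as below; the trajectory and the values of λ are illustrative assumptions, not part of the original development. With \(\lambda=0\) the λ-norm reduces to the sup-norm, and on the finite interval J the two norms are equivalent (\(e^{-\lambda a}\|x\|_{\infty}\leq\|x\|_{\lambda}\leq\|x\|_{\infty}\)); a larger λ discounts late-time values.

```python
import numpy as np

def lam_norm(t, x, lam):
    """Approximate the lambda-norm sup_t e^(-lam*t) |x(t)| on a sample grid."""
    return np.max(np.exp(-lam * t) * np.abs(x))

t = np.linspace(0.0, 1.0, 1001)   # grid on J = [0, 1]
x = np.exp(2.0 * t)               # an assumed sample trajectory

# lam = 0 recovers the sup-norm, here e^2 attained at t = 1.
sup_norm = lam_norm(t, x, 0.0)
# lam = 5 discounts late times: max of e^{-3t} is 1, attained at t = 0.
small = lam_norm(t, x, 5.0)
print(sup_norm, small)
```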

By a PC-mild solution of (1) with initial value \(x(0)=x_{0}\in X\), we mean a function \(x_{k}\in PC(J,X)\) that satisfies the following integral equation [25]:

$$\begin{aligned} x_{k}(t) =&T(t)x_{0}+\int_{0}^{t}T(t-s) \bigl[f\bigl(s,x_{k}(s)\bigr)+Bu_{k}(s)\bigr]\,ds \\ &{}+\sum _{0< t_{j}< t}T(t-t_{j})g_{j} \bigl(x_{k}(t_{j})\bigr),\quad t\in J. \end{aligned}$$
(3)

By adopting the same methods as in [25], Theorem 2.1, one can obtain the existence and uniqueness of a PC-mild solution of (1) with \(x(0)=x_{0}\) when f and \(g_{i}\) satisfy the standard Lipschitz conditions.

Substituting (3) into (2), we have

$$\begin{aligned} y_{k}(t) =&CT(t)x_{0}+\int_{0}^{t}CT(t-s) \bigl[f\bigl(s,x_{k}(s)\bigr)+Bu_{k}(s)\bigr]\,ds \\ &{}+\sum_{0< t_{j}< t}CT(t-t_{j})g_{j} \bigl(x_{k}(t_{j})\bigr)+D u_{k}(t),\quad t\in J. \end{aligned}$$

Let \(y_{d}(\cdot)\) be the desired trajectory. Denote \(\Delta u_{k}(t):=u_{k+1}(t)-u_{k}(t)\), \(\Delta x_{k}(t):=x_{k+1}(t)-x_{k}(t)\), and \(e_{k}(t):=y_{d}(t)-y_{k}(t)\), where k denotes the iteration index. Consider the open-loop P-type ILC updating law with initial state learning:

$$ \left \{ \textstyle\begin{array}{@{}l} x_{k+1}(0)=x_{k}(0)+L_{1}e_{k}(0),\\ u_{k+1}(t)=u_{k}(t)+\gamma_{1} e_{k}(t), \end{array}\displaystyle \right . $$
(4)

and the open-loop D-type ILC updating law with initial state learning:

$$ \left \{ \textstyle\begin{array}{@{}l} x_{k+1}(0)=x_{k}(0)+L_{2}e_{k}(0),\\ u_{k+1}(t)=u_{k}(t)+\gamma_{2} \dot{e}_{k}(t), \end{array}\displaystyle \right . $$
(5)

where \(L_{1},L_{2}\in L(Y,X)\) and \(\gamma_{1},\gamma_{2}\in L(Y,U)\) are learning gain operators to be determined.

Concerning the system (1), we will design P-type and D-type iterative learning schemes to generate the control input \(u_{k}(\cdot)\) such that the system piecewise continuous output \(y_{k}(\cdot)\) tracks the discontinuous desired output trajectory \(y_{d}(\cdot)\) as accurately as possible as \(k\to\infty\) for \(t\in J\) in the sense of suitable norms. We shall give two convergence results for open-loop iterative learning schemes in the sense of the \(L^{p}\)-norm and λ-norm, respectively, in the next sections.
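Before turning to the convergence analysis, the updating mechanism in (4) can be illustrated on a crude discrete-time surrogate. The sketch below is purely an assumption for illustration: a scalar plant with direct feedthrough (playing the role of a nonzero D), a forward Euler discretization, and an assumed learning gain; it is not the infinite-dimensional system (1)-(2).

```python
import numpy as np

# Illustrative scalar surrogate (assumed): plant x' = -x + u with direct
# feedthrough, output y = x + u, simulated by forward Euler on [0, 1].
dt, n = 0.01, 101
t = np.linspace(0.0, 1.0, n)
y_d = np.sin(np.pi * t)        # assumed desired trajectory
gamma = 0.8                    # assumed P-type learning gain; |1 - gamma| < 1

u = np.zeros(n)                # arbitrary initial input u_0
errors = []
for k in range(30):
    x = np.zeros(n)            # fixed initial state, x_k(0) = 0
    for i in range(n - 1):
        x[i + 1] = x[i] + dt * (-x[i] + u[i])
    e = y_d - (x + u)          # tracking error e_k = y_d - y_k with y = x + u
    errors.append(np.max(np.abs(e)))
    u = u + gamma * e          # P-type update u_{k+1} = u_k + gamma * e_k

print(errors[0], errors[-1])
```

The per-iteration error map here is \(((1-\gamma)I-\gamma G)\) for a lower-triangular input-output matrix G, so the sup-norm of the error contracts geometrically, mirroring the role of the feedthrough term D in condition (7) below.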

3 Convergence results for P-type ILC updating law

We need the following assumptions:

(H0):

A: \(D(A)\subseteq X \to X\) is the generator of a \(C_{0}\)-semigroup \(T(t)\), \(t\geq0\) on X. Denote \(M:=\sup_{t\in J}\|T(t)\|_{L(X,X)}\).

(H1):

\(f:J\times X\rightarrow X\) is strongly measurable in the first variable and continuous in the second variable. Moreover, there exists an \(L_{f}>0\) such that

$$\bigl\| f(t,u)-f(t,v)\bigr\| _{X}\leq L_{f}\|u-v\|_{X}, \quad u,v\in X, t\in J. $$
(H2):

There exists an \(L_{g}>0\) such that

$$\bigl\| g_{i}(u)-g_{i}(v)\bigr\| _{X}\leq L_{g} \|u-v\|_{X},\quad u,v\in X, i\in \mathbb{M}. $$
(H3):

Let \(I_{Y}\) be the identity operator on Y and let \(I_{Y}-D\gamma_{1}-CL_{1}\in L(Y,Y)\) satisfy

$$ \rho:=\|I_{Y}-D\gamma_{1}-CL_{1} \|_{L(Y,Y)}< 1. $$
(6)

Now we are ready to give the first convergence result in the sense of the \(L^{p}\)-norm.

Theorem 3.1

Assume that (H0)-(H3) hold. If

$$ \|I_{Y}-D\gamma_{1}\|_{L(Y,Y)} \sqrt[q]{a}+\|C\|_{L(X,Y)}M\|B\|_{L(U,X)}\| \gamma_{1} \|_{L(Y,U)}\sqrt[q]{a^{2}}(1+ML_{g})^{m}e^{ML_{f}a}< 1, $$
(7)

then for arbitrary initial input \(u_{0}\), (4) guarantees that \(y_{k}\) tends to \(y_{d}\in L^{p}(J,Y)\) as \(k\to\infty\) in the sense of the \(L^{p}\)-norm where \(1< p,q<\infty\) and \(\frac{1}{p}+\frac{1}{q}=1\).

Proof

Linking (4) and (2), we have

$$\begin{aligned} e_{k+1}(t)=y_{d}(t)-y_{k+1}(t) =(I_{Y}-D\gamma_{1})e_{k}(t)-C\Delta x_{k}(t). \end{aligned}$$
(8)

In what follows, we prove \(\|e_{k+1}\|_{L^{p}}\to0\) as \(k\to\infty\).

Step 1. We prove that \(\|e_{k+1}(0)\|_{Y}\to0\) as \(k\to\infty\).

In fact, for \(t=0\), by using (8) we have

$$\begin{aligned} e_{k+1}(0)=(I_{Y}-D\gamma_{1})e_{k}(0)-C \Delta x_{k}(0). \end{aligned}$$
(9)

Substituting (4) into (9) and taking the Y-norm, we have

$$\begin{aligned} \bigl\| e_{k+1}(0)\bigr\| _{Y}\leq\|I_{Y}-D\gamma_{1}-CL_{1} \|_{L(Y,Y)}\bigl\| e_{k}(0)\bigr\| _{Y}=\rho \bigl\| e_{k}(0)\bigr\| _{Y}, \end{aligned}$$

which implies that

$$ \bigl\| e_{k+1}(0)\bigr\| _{Y}\leq\rho^{k} \bigl\| e_{1}(0)\bigr\| _{Y}. $$
(10)

Linking (6) and (10), we conclude that

$$ \lim_{k\rightarrow\infty}\bigl\| e_{k+1}(0) \bigr\| _{Y}=0. $$
(11)

Step 2. For any \(t\in(t_{i}, t_{i+1}]\), \(i=0,1,\ldots,m\), we have

$$\begin{aligned} &\bigl\| \Delta x_{k}(t)\bigr\| _{X} \\ &\quad=\bigl\| x_{k+1}(t)-x_{k}(t)\bigr\| _{X} \\ &\quad\leq M\bigl\| \Delta x_{k}(0)\bigr\| _{X}+ML_{f}\int _{0}^{t}\bigl\| \Delta x_{k}(s)\bigr\| _{X}\,ds+M\|B\|_{L(U,X)}\int_{0}^{t}\bigl\| \Delta u_{k}(s)\bigr\| _{U}\,ds \\ &\qquad{}+ML_{g}\sum_{0< t_{j}< t}\bigl\| \Delta x_{k}(t_{j})\bigr\| _{X} \\ &\quad\leq M\|L_{1}\|_{L(Y,X)}\bigl\| e_{k}(0) \bigr\| _{Y}+ML_{f}\int_{0}^{t}\bigl\| \Delta x_{k}(s)\bigr\| _{X}\,ds \\ &\qquad{}+M\|B\|_{L(U,X)}\|\gamma_{1}\|_{L(Y,U)}\int _{0}^{t}\bigl\| e_{k}(s)\bigr\| _{Y} \,ds+ML_{g}\sum_{0< t_{j}< t}\bigl\| \Delta x_{k}(t_{j})\bigr\| _{X}. \end{aligned}$$

Using the impulsive Gronwall inequality (see [26]) and the Hölder inequality, we have

$$\begin{aligned} &\bigl\| \Delta x_{k}(t)\bigr\| _{X} \\ &\quad\leq \biggl(M\|L_{1}\|_{L(Y,X)}\bigl\| e_{k}(0) \bigr\| _{Y}+M\|B\|_{L(U,X)}\|\gamma_{1}\| _{L(Y,U)} \int_{0}^{t}\bigl\| e_{k}(s)\bigr\| _{Y} \,ds \biggr) (1+ML_{g})^{m}e^{ML_{f}a} \\ &\quad\leq \bigl(M\|L_{1}\|_{L(Y,X)}\bigl\| e_{k}(0) \bigr\| _{Y}+M\|B\|_{L(U,X)}\|\gamma_{1}\| _{L(Y,U)} \sqrt[q]{a}\|e_{k}\|_{L^{p}} \bigr) (1+ML_{g})^{m}e^{ML_{f}a}, \end{aligned}$$
(12)

where \(\frac{1}{p}+\frac{1}{q}=1\) and \(p,q>1\).

Taking the Y-norm for (8) and substituting (12) into it, we have

$$\begin{aligned} &\bigl\| e_{k+1}(t)\bigr\| _{Y} \\ &\quad\leq \|I_{Y}-D\gamma_{1}\|_{L(Y,Y)} \bigl\| e_{k}(t)\bigr\| _{Y}+\|C\|_{L(X,Y)} \bigl(M \|L_{1}\|_{L(Y,X)}\bigl\| e_{k}(0)\bigr\| _{Y} \\ &\qquad{}+M\|B\|_{L(U,X)}\|\gamma_{1}\|_{L(Y,U)} \sqrt[q]{a}\|e_{k}\|_{L^{p}} \bigr) (1+ML_{g})^{m}e^{ML_{f}a} \\ &\quad\leq \|I_{Y}-D\gamma_{1}\|_{L(Y,Y)} \bigl\| e_{k}(t)\bigr\| _{Y} +\|C\|_{L(X,Y)}M\|L_{1}\| _{L(Y,X)}(1+ML_{g})^{m}e^{ML_{f}a} \bigl\| e_{k}(0)\bigr\| _{Y} \\ &\qquad{}+\|C\|_{L(X,Y)}M\|B\|_{L(U,X)}\|\gamma_{1} \|_{L(Y,U)}\sqrt[q]{a}\|e_{k}\| _{L^{p}}(1+ML_{g})^{m}e^{ML_{f}a}. \end{aligned}$$

For the above inequality, one can take the \(L^{p}\)-norm to derive that

$$\begin{aligned} &\|e_{k+1}\|_{L^{p}} \\ &\quad\leq \|C\|_{L(X,Y)}M\|L_{1}\|_{L(Y,X)}(1+ML_{g})^{m}e^{ML_{f}a} \sqrt[q]{a}\bigl\| e_{k}(0)\bigr\| _{Y} \\ &\qquad{}+\bigl(\|I_{Y}-D\gamma_{1}\|_{L(Y,Y)} \sqrt[q]{a}+\|C\|_{L(X,Y)}M\|B\|_{L(U,X)}\| \gamma_{1} \|_{L(Y,U)}\sqrt[q]{a^{2}}(1+ML_{g})^{m}e^{ML_{f}a} \bigr)\\ &\qquad{}\times\|e_{k}\|_{L^{p}}. \end{aligned}$$

Finally, one can use (7) and (11) to derive that

$$\lim_{k\rightarrow\infty}\|e_{k+1}\|_{L^{p}}=0. $$

The proof is completed. □

Remark 3.2

The condition (7) in Theorem 3.1 seems to be a bit strong since we choose the \(L^{p}\)-norm. However, one can choose another suitable norm, the λ-norm, to weaken this condition.

Next we give the second convergence result in the sense of the λ-norm.

Theorem 3.3

Assume that (H0)-(H3) hold and

$$ \widetilde{\rho}:=\|I_{Y}-D\gamma_{1} \|_{L(Y,Y)}< 1. $$
(13)

Then, for an arbitrary initial input \(u_{0}\), (4) guarantees that \(y_{k}\) tends to \(y_{d}\in PC(J,Y)\) as \(k\to\infty\) in the sense of the λ-norm for a sufficiently large \(\lambda>0\).

Proof

By our assumptions and Theorem 3.1, we know (11) holds. Next, we only need to prove \(\|e_{k+1}\|_{\lambda}\to0\) as \(k\to\infty\).

Note that

$$e^{-\lambda t}\int_{0}^{t} \bigl\| e_{k}(s) \bigr\| _{Y}\,ds\leq \frac{1}{\lambda}\|e_{k}\|_{\lambda}. $$

Then (12) becomes

$$\begin{aligned} \bigl\| \Delta x_{k}(t)\bigr\| _{X} \leq& \biggl(M \|L_{1}\|_{L(Y,X)}\bigl\| e_{k}(0)\bigr\| _{Y}+ \frac{e^{\lambda t}M\|B\|_{L(U,X)}\|\gamma_{1}\|_{L(Y,U)}}{\lambda}\|e_{k}\|_{\lambda} \biggr) \\ &{}\times(1+ML_{g})^{m}e^{ML_{f}a}. \end{aligned}$$
(14)

Substituting (14) into (8) again, we have

$$\begin{aligned} \bigl\| e_{k+1}(t)\bigr\| _{Y} \leq& \|I_{Y}-D \gamma_{1}\|_{L(Y,Y)}\bigl\| e_{k}(t)\bigr\| _{Y}+\|C \|_{L(X,Y)}\biggl(M\|L_{1}\| _{L(Y,X)}\bigl\| e_{k}(0) \bigr\| _{Y} \\ &{}+\frac{e^{\lambda t}M\|B\|_{L(U,X)}\|\gamma_{1}\|_{L(Y,U)}}{\lambda}\|e_{k}\|_{\lambda }\biggr) (1+ML_{g})^{m}e^{ML_{f}a}. \end{aligned}$$

For the above inequality, one can take the λ-norm to derive that

$$\begin{aligned} \|e_{k+1}\|_{\lambda} \leq& \|C\|_{L(X,Y)}M \|L_{1}\|_{L(Y,X)}(1+ML_{g})^{m}e^{ML_{f}a} \bigl\| e_{k}(0)\bigr\| _{Y} \\ &{}+ \biggl(\|I_{Y}-D\gamma_{1}\|_{L(Y,Y)}+ \frac{\|C\|_{L(X,Y)}M\|B\|_{L(U,X)}\|\gamma_{1}\| _{L(Y,U)}(1+ML_{g})^{m}e^{ML_{f}a}}{\lambda} \biggr)\|e_{k}\|_{\lambda}. \end{aligned}$$

Then, choosing \(\lambda>0\) large enough and using (13), we obtain

$$\|e_{k+1}\|_{\lambda} \leq \widetilde{\rho}\|e_{k} \|_{\lambda}, $$

which gives

$$\lim_{k\rightarrow\infty}\|e_{k+1}\|_{\lambda}=0. $$

The proof is completed. □
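The mechanism in the proof above is worth isolating: every Gronwall-type constant ends up divided by λ, so under (13) the iteration factor, which has the form \(\widetilde{\rho}+c/\lambda\), can always be pushed below 1 by enlarging λ. A scalar numeric sketch (the values of \(\widetilde{\rho}\) and c are illustrative assumptions):

```python
# Scalar illustration of the lambda-norm trick: the iteration factor
# rho_tilde + c / lam drops below 1 once lam is large enough, no matter
# how large the Gronwall-type constant c is.
rho_tilde = 0.9          # assumed value of ||I_Y - D gamma_1|| < 1, cf. (13)
c = 50.0                 # assumed Gronwall-type constant

def factor(lam):
    return rho_tilde + c / lam

lam_small, lam_large = 10.0, 1000.0
print(factor(lam_small), factor(lam_large))   # 5.9 vs 0.95
```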

Remark 3.4

One can use the same method as in Theorem 3.3 to weaken assumption (7) in Theorem 3.1 by replacing the standard \(L^{p}\)-norm with its exponentially weighted counterpart (inserting the factor \(e^{-\lambda t}\)) for some sufficiently large λ.

4 Convergence results for D-type ILC updating law

In this section we assume that \(f(t,x_{k})=x_{k}\), \(g_{i}(x_{k})=x_{k}\) in (1), and \(D=0\) in (2). Moreover, we need the assumption below.

(H4):

\(T(t)\) is differentiable for \(t> 0\). Then, by [27], Lemma 4.2, \(\frac{d}{dt}T(t)=AT(t)\) is a bounded linear operator for each \(t>0\), i.e., \(AT(t)\in L(X,X)\).

Theorem 4.1

Assume that (H0)-(H4) hold and

$$\delta:=\bigl\| I_{Y}-CT(0)B\gamma_{2}\bigr\| _{L(Y,Y)}< 1. $$

Then, for an arbitrary initial input \(u_{0}\), (5) guarantees that \(y_{k}\) tends to \(y_{d}\in PC(J,Y)\) as \(k\to\infty\) in the sense of the λ-norm for a sufficiently large \(\lambda>0\).

Proof

In order to prove \(\|e_{k+1}\|_{\lambda}\to0\) as \(k\to\infty\), we divide our proof into two steps.

Step 1. We first compute the time derivative of the tracking error at each iteration. In fact, linking (5) and (2), we have

$$\begin{aligned} \dot{e}_{k+1}(t) =&\dot{y}_{d}(t)-\dot{y}_{k+1}(t) \\ =&\dot{y}_{d}(t)-C\frac{d}{dt} \biggl[T(t)x_{0}+\int _{0}^{t}T(t-s)\bigl[x_{k+1}(s)+Bu_{k+1}(s) \bigr]\,ds \\ &{}+\sum_{0< t_{j}< t}T(t-t_{j})x_{k+1}(t_{j}) \biggr] \\ =&\dot{y}_{d}(t)-C\frac{d}{dt}T(t)x_{0}-CT(0)x_{k+1}(t)-C \int_{0}^{t}\frac{d}{dt}T(t-s)x_{k+1}(s) \,ds \\ &{} -CT(0)Bu_{k+1}-C\int_{0}^{t} \frac{d}{dt}T(t-s)Bu_{k+1}(s)\,ds \\ &{}-C\frac{d}{dt}\sum_{0< t_{j}< t}T(t-t_{j})x_{k+1}(t_{j}) \\ =&\dot{y}_{d}(t)-C\frac{d}{dt}T(t)x_{0}-CT(0) x_{k}(t)-CT(0)\Delta x_{k+1}(t) \\ &{}-C\int_{0}^{t}\frac{d}{dt}T(t-s)x_{k}(s) \,ds-C\int_{0}^{t}\frac{d}{dt}T(t-s)\Delta x_{k+1}(s)\,ds \\ &{}-CT(0)Bu_{k}(t)-CT(0)B\gamma_{2}\dot{e}_{k}(t) \\ &{}-C\int_{0}^{t}\frac {d}{dt}T(t-s)Bu_{k}(s) \,ds-C\int_{0}^{t}\frac{d}{dt}T(t-s)B \gamma_{2}\dot {e}_{k}(s)\,ds \\ &{}-C\frac{d}{dt}\sum_{0< t_{j}< t}T(t-t_{j}) \Delta x_{k+1}(t_{j})-C\frac {d}{dt}\sum _{0< t_{j}< t}T(t-t_{j})x_{k}(t_{j}) \\ =&\dot{y}_{d}(t)-C\frac{d}{dt}T(t)x_{0}-CT(0) x_{k}(t) -C\int_{0}^{t} \frac{d}{dt}T(t-s)x_{k}(s)\,ds \\ &{}-CT(0)Bu_{k}(t)-C\int_{0}^{t} \frac{d}{dt}T(t-s)Bu_{k}(s)\,ds-C\frac {d}{dt}\sum _{0< t_{j}< t}T(t-t_{j})x_{k}(t_{j}) \\ &{}-CT(0)B\gamma_{2}\dot{e}_{k}(t)-C\int _{0}^{t}\frac{d}{dt}T(t-s)B \gamma_{2}\dot {e}_{k}(s)\,ds \\ &{}-C\frac{d}{dt}\sum_{0< t_{j}< t}T(t-t_{j}) \Delta x_{k+1}(t_{j}) \\ &{}-CT(0)\Delta x_{k+1}(t)-C\int_{0}^{t} \frac{d}{dt}T(t-s)\Delta x_{k+1}(s)\,ds \\ =&\bigl(I_{Y}-CT(0)B\gamma_{2}\bigr)\dot{e}_{k}(t)-C \int_{0}^{t}AT(t-s)B\gamma_{2}\dot {e}_{k}(s)\,ds \\ &{}-CT(0)\Delta x_{k+1}(t)-C\int_{0}^{t}AT(t-s) \Delta x_{k+1}(s)\,ds \\ &{}-C\sum_{0< t_{j}< t}AT(t-t_{j})\Delta x_{k+1}(t_{j}). \end{aligned}$$

This yields

$$\begin{aligned} &\bigl\| \dot{e}_{k+1}(t)\bigr\| _{Y} \\ &\quad\leq\bigl\| I_{Y}-CT(0)B\gamma_{2}\bigr\| _{L(Y,Y)}\bigl\| \dot{e}_{k}(t)\bigr\| _{Y} \\ &\qquad{}+\|C\|_{L(X,Y)}\max_{t\in[0,a]}\bigl\| AT(t) \bigr\| _{L(X,X)}\|B\|_{L(U,X)}\|\gamma _{2}\|_{L(Y,U)}\int _{0}^{t}\bigl\| \dot{e}_{k}(s) \bigr\| _{Y}\,ds \\ &\qquad{}+M\|C\|_{L(X,Y)}\bigl\| \Delta x_{k+1}(t)\bigr\| _{X}+\|C \|_{L(X,Y)}\max_{t\in[0,a]}\bigl\| AT(t)\bigr\| _{L(X,X)}\int _{0}^{t}\bigl\| \Delta x_{k+1}(s)\bigr\| \,ds \\ &\qquad{}+\|C\|_{L(X,Y)}\max_{t\in[a-t_{m},a-t_{1}]}\bigl\| AT(t)\bigr\| _{L(X,X)}\sum_{0< t_{j}< t}\bigl\| \Delta x_{k+1}(t_{j})\bigr\| . \end{aligned}$$

Taking the λ-norm, we get

$$\begin{aligned} \|\dot{e}_{k+1}\|_{\lambda} \leq&\delta\|\dot{e}_{k} \|_{\lambda} \\ &{}+\frac{\|C\|_{L(X,Y)}\max_{t\in[0,a]}\|AT(t)\|_{L(X,X)}\|B\|_{L(U,X)}\| \gamma_{2}\|_{L(Y,U)}}{\lambda}\|\dot{e}_{k}\|_{\lambda}:=I_{1} \\ &{}+M\|C\|_{L(X,Y)}\|\Delta x_{k+1}\|_{\lambda} \\ &{}+\frac{\|C\|_{L(X,Y)}\max_{t\in[0,a]}\|AT(t)\|_{L(X,X)}}{\lambda}\| \Delta x_{k+1}\|_{\lambda}:=I_{2} \\ &{}+\sum_{j=1}^{m}\frac{\|C\|_{L(X,Y)}\max_{t\in[a-t_{m},a-t_{1}]}\|AT(t)\| _{L(X,X)}}{e^{\lambda(t-t_{j})}}\| \Delta x_{k+1}\|_{\lambda}:=I_{3}. \end{aligned}$$

Obviously, the terms \(I_{i}\), \(i=1,2,3\), tend to zero if we choose \(\lambda>0\) large enough.

Concerning (14) with k replaced by \(k+1\), we take the λ-norm,

$$\begin{aligned} \|\Delta x_{k+1}\|_{\lambda} \leq&M \|L_{1}\|_{L(Y,X)}(1+ML_{g})^{m}e^{ML_{f}a} \bigl\| e_{k+1}(0)\bigr\| _{Y}:=I_{4} \\ &{}+\frac{M\|B\|_{L(U,X)}\|\gamma_{1}\| _{L(Y,U)}(1+ML_{g})^{m}e^{ML_{f}a}}{\lambda}\|e_{k+1}\|_{\lambda}:=I_{5}. \end{aligned}$$

Using (11) and taking \(\lambda>0\) large enough, we see that \(I_{i}\), \(i=4,5\), tend to zero as \(k\to\infty\). This yields \(\|\Delta x_{k+1}\|_{\lambda}\to0\) as \(k\to\infty\).

Next, noting that \(\delta<1\), we obtain

$$\begin{aligned} \lim_{k\to\infty}\|\dot{e}_{k+1} \|_{\lambda}=0. \end{aligned}$$
(15)

Step 2. We estimate the tracking error at each iteration,

$$\begin{aligned} \bigl\| e_{k+1}(t)\bigr\| _{Y}\leq \bigl\| e_{k+1}(0)\bigr\| _{Y}+\int _{0}^{t}\bigl\| \dot{e}_{k+1}(s) \bigr\| _{Y}\,ds. \end{aligned}$$

Taking the λ-norm, we get

$$\begin{aligned} \|e_{k+1}\|_{\lambda}\leq\bigl\| e_{k+1}(0) \bigr\| _{Y}+ \frac{1}{\lambda}\|\dot{e}_{k+1}\|_{\lambda}. \end{aligned}$$
(16)

Linking (11), (15), and (16), we derive the desired result. □
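As with the P-type law, the D-type mechanism (5) can be illustrated on a crude discrete-time surrogate matching the Section 4 setting (\(f(t,x)=x\), \(D=0\)). The sketch below uses a scalar plant, forward Euler, and a forward-difference approximation of \(\dot{e}_{k}\); all numerical choices are assumptions for illustration.

```python
import numpy as np

# Illustrative scalar surrogate (assumed): x' = x + u, y = x,
# forward Euler on [0, 1], D-type update via a forward difference of e_k.
dt, n = 0.01, 101
t = np.linspace(0.0, 1.0, n)
y_d = t ** 2                   # assumed desired trajectory with y_d(0) = 0
gamma = 0.5                    # assumed D-type gain; |1 - C T(0) B gamma| = 0.5 < 1

u = np.zeros(n)                # arbitrary initial input u_0
errors = []
for k in range(40):
    x = np.zeros(n)            # x_k(0) = 0 = y_d(0), so e_k(0) = 0
    for i in range(n - 1):
        x[i + 1] = x[i] + dt * (x[i] + u[i])
    e = y_d - x
    errors.append(np.max(np.abs(e)))
    # D-type update u_{k+1}(i) = u_k(i) + gamma * (e_k(i+1) - e_k(i)) / dt;
    # the last sample of u does not affect y and is left unchanged.
    u[:-1] = u[:-1] + gamma * np.diff(e) / dt

print(errors[0], errors[-1])
```

The forward difference shifts the error by one sample, which is what lets a D-type law succeed here even though the output has no feedthrough term.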

5 Example

In this section, we consider the following impulsive partial differential equation with Neumann boundary conditions:

$$ \left \{ \textstyle\begin{array}{@{}l} \frac{\partial}{\partial t}x(t,z)=\frac{\partial^{2}}{\partial z^{2}} x(t,z)+l_{1}\cos t\sin x(t,z)+l_{2} u(t,z),\quad z\in(0,1), t\in[0,\frac{1}{3})\cup(\frac{1}{3},1],\\ \frac{\partial}{\partial z} x(t,0)=\frac{\partial}{\partial z} x(t,1)=0,\quad t\in[0,\frac{1}{3})\cup(\frac{1}{3},1],\\ x(\frac{1}{3}^{+},z)-x(\frac{1}{3}^{-},z)=l_{3} x(\frac{1}{3},z),\quad l_{3}\in \mathbb{R}, z\in(0,1), \end{array}\displaystyle \right . $$
(17)

where \(l_{i}\in\mathbb{R}^{+}\), \(i=1,2,3\), and

$$ y(t,z)=c x(t,z)+du(t,z),\quad c\in\mathbb{R}^{+},d\geq0, t\in J, z \in (0,1). $$
(18)

Let \(X=U=Y=L^{2}(0,1)\). Set \(J=[0,1]\), \(m=1\), and \(t_{1}=\frac{1}{3}\). Define \(A: D(A)\subset X\rightarrow X\) by \(Ax=\frac{\partial ^{2}}{\partial z^{2}}x:=x_{zz} \), where \(D(A)=\{x\in H^{2}((0,1)): x_{z}(0)=x_{z}(1)=0\}\). Then A can be written as \(Ax=-\sum_{n=1}^{\infty}n^{2}\langle x,x_{n}\rangle x_{n}\), \(x\in D(A)\), where \(x_{n}(z) = \sqrt{\frac{2}{\pi}}\cos n\pi z\), \(n = 1,2,\ldots \) . Moreover, A generates a \(C_{0}\)-semigroup \(T(t)\), \(t\geq0\), given by \(T(t)x:=\sum_{n=1}^{\infty}e^{- n^{2}t}\langle x,x_{n}\rangle x_{n}\), with \(\|T(t)\|_{L(X,X)}\leq e^{-t}\leq1=M\). Thus, (H0) holds. Moreover, \(T(t)\) is differentiable for \(t> 0\) with \(\frac{d}{dt}T(t)x=AT(t)x=-\sum_{n=1}^{\infty}\frac{n^{2}}{e^{ n^{2}t}}\langle x,x_{n}\rangle x_{n}\) and \(\|AT(t)\|_{L(X,X)}\leq 1\). Thus, (H4) holds.

Denote \(x(\cdot)(z)=x(\cdot,z)\), \(f(\cdot,x)(z)=l_{1}\cos\cdot\sin x(\cdot,z)\), \(Bu(\cdot)(z)=l_{2}u(\cdot,z)\), and \(g_{1}(x(t_{1}^{-}))(z)=l_{3} x(t_{1}^{-},z)\); then (17) can be abstracted into the form (1). Thus, (H1) and (H2) hold with \(L_{f}=l_{1}\) and \(L_{g}=l_{3}\).

Denote \(y(\cdot)(z)=y(\cdot,z)\), \(C=cI_{Y}\), and \(D=dI_{Y}\); then (18) can be rewritten as (2). With the learning gains chosen below, \(I_{Y}-D\gamma_{1}-CL_{1}=(1-d-c)I_{Y}\in L(Y,Y)\), so (H3) holds whenever \(|1-d-c|<1\).

We consider (4) and (5) where \(L_{1}=L_{2}=I_{X}\in L(Y,X)\) and \(\gamma_{1}=\gamma_{2}=I_{U}\in L(Y,U)\). Then we have the following conclusions:

  • Choosing \(d=1\) and c satisfying \(c<\frac{1}{l_{2}(1+l_{3})e^{l_{1}}}\) implies (7) holds. By Theorem 3.1, \(y_{k}\) tends to \(y_{d}\) as \(k\to\infty\) in the sense of the \(L^{p}\)-norm.

  • Set \(0<d<1\). Then \(1-d>0\), which implies that (13) holds. By Theorem 3.3, \(y_{k}\) tends to \(y_{d}\) as \(k\to\infty\) in the sense of the λ-norm (for a sufficiently large λ).

  • Set \(d=0\). Clearly, \(\delta=|1-cl_{2}|<1\) whenever \(0<cl_{2}<2\). By Theorem 4.1, \(y_{k}\) tends to \(y_{d}\) as \(k\to\infty\) in the sense of the λ-norm (for a sufficiently large λ).
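As a numerical sanity check (not from the paper), the example (17)-(18) can be discretized by explicit finite differences and run under the P-type law (4) with \(L_{1}=\gamma_{1}=I\). All numerical values below (\(l_{i}\), c, d, the grids, and the desired trajectory, taken smooth for simplicity) are assumptions chosen for the sketch. With \(d=1\) and c below the bound of the first bullet, the tracking error decays rapidly across iterations.

```python
import numpy as np

# Illustrative finite-difference simulation of (17)-(18) under the P-type
# law (4) with L1 = gamma1 = I. All parameter values are assumptions.
l1, l2, l3 = 1.0, 1.0, 0.5
c, d = 0.2, 1.0            # c < 1/(l2*(1+l3)*e^{l1}) ~ 0.245, so (7) holds (a = 1)

nz, nt = 21, 1000
dz, dt = 1.0 / (nz - 1), 1.0 / nt      # dt < dz^2 / 2: explicit scheme stable
z = np.linspace(0.0, 1.0, nz)
t = np.linspace(0.0, 1.0, nt + 1)
i_imp = nt // 3                        # impulse applied near t = 1/3

y_d = (1.0 + t)[:, None] * np.cos(np.pi * z)[None, :]   # assumed desired output

def run(u, x0):
    """One pass of the impulsive heat equation; returns the state history."""
    x = np.zeros((nt + 1, nz))
    x[0] = x0
    for i in range(nt):
        lap = np.empty(nz)
        lap[1:-1] = (x[i, 2:] - 2 * x[i, 1:-1] + x[i, :-2]) / dz**2
        lap[0] = 2 * (x[i, 1] - x[i, 0]) / dz**2     # Neumann: x_z(t,0) = 0
        lap[-1] = 2 * (x[i, -2] - x[i, -1]) / dz**2  # Neumann: x_z(t,1) = 0
        x[i + 1] = x[i] + dt * (lap + l1 * np.cos(t[i]) * np.sin(x[i]) + l2 * u[i])
        if i + 1 == i_imp:                           # jump x(1/3^+) = (1 + l3) x(1/3)
            x[i + 1] = (1.0 + l3) * x[i + 1]
    return x

u = np.zeros((nt + 1, nz))
x0 = np.zeros(nz)
errors = []
for k in range(15):
    x = run(u, x0)
    e = y_d - (c * x + d * u)          # e_k = y_d - y_k with y = c x + d u
    errors.append(np.max(np.abs(e)))
    u = u + e                          # P-type update, gamma1 = I
    x0 = x0 + e[0]                     # initial state learning, L1 = I

print(errors[0], errors[-1])
```

With \(d=1\) the feedthrough term absorbs the previous error in one shot (\(e_{k+1}=-c\,\Delta x_{k}\)), so the decay observed here is much faster than the conservative estimate behind condition (7) suggests.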