In this section we consider continuous-time Lur’e inclusions, treating the unforced and forced cases in separate subsections. We first collect some terminology and results which are used in both subsections.
Given a set-valued map \(H : \mathbb R_+ \times \mathbb R^n \rightarrow P_0(\mathbb R^n)\) and \(x^0 \in \mathbb R^n\), we say that an absolutely continuous function \(x : [0, \omega ) \rightarrow \mathbb R^n\), for some \(0< \omega \le \infty \), is a solution of the initial value problem
$$\begin{aligned} \dot{x}(t) \in H(t,x(t)), \quad x(0) = x^0, \end{aligned}$$
(2.1)
if (2.1) holds for almost all \(t \in [0, \omega )\). If \(\omega =\infty \), then x is said to be a global solution. If \(x: [0, \omega ) \rightarrow \mathbb R^n\) is a solution of (2.1) for some \(0<\omega \le \infty \), then \(\dot{x}\) is a locally integrable selection of \(t \mapsto H(t, x(t))\). Therefore, for all \(0\le t_1 \le t_2 < \omega \), the set
$$\begin{aligned}&\int _{t_1}^{t_2} H(s,x(s)) \,ds\\&\quad := \left\{ \int _{t_1}^{t_2} h(s) \, ds \, \Big \vert \, h \in L^1([t_1,t_2];\mathbb R^n)\text { is a selection of }t \mapsto H(t, x(t)) \right\} , \end{aligned}$$
is non-empty and, with this notation, x satisfies the integral inclusion
$$\begin{aligned} x(t_2) - x(t_1) \in \int _{t_1}^{t_2} H(s,x(s)) \, ds. \end{aligned}$$
(2.2)
We shall consider the special case of (2.1) with
$$\begin{aligned} H(t,x) = Ax + BG(t,Cx) + D(t), \end{aligned}$$
(2.3)
and
$$\begin{aligned} (A,B,C) \in \mathbb R^{n \times n} \times \mathbb R^{n\times m} \times \mathbb R^{p \times n}, \quad D : \mathbb R_+ \rightarrow P_0\left( \mathbb R^n\right) , \quad \text {and} \quad G : \mathbb R_+ \times \mathbb R^p \rightarrow P_0\left( \mathbb R^m\right) , \end{aligned}$$
(2.4)
for some \(m,n,p \in \mathbb N\). We will impose positivity and stability properties on the data in (2.4) later in the section. Combined, (2.1), (2.3) and (2.4) give rise to the system of Lur’e differential inclusions
$$\begin{aligned} \dot{x}(t) - Ax(t) \in B G(t,Cx(t)) + D(t), \quad x(0) = x^0, \quad t \in \mathbb R_+. \end{aligned}$$
(2.5)
Our primary, but not exclusive, focus is the autonomous case wherein \(G(t,y) = F(y)\) for some \(F:\mathbb R^p \rightarrow P_0(\mathbb R^m)\), so that (2.5) reduces to (1.3). The set-valued forcing term \(D : \mathbb R_+ \rightarrow P_0(\mathbb R^n)\) does not play a role in Sect. 2.1, where (1.1) is considered by taking \(D = \{0\}\). As mentioned in the Introduction, when G is singleton-valued, (2.5) simplifies to a system of Lur’e differential equations, a special case which we consider in Sect. 2.3. We next state and prove a “variation of parameters” expression for solutions of forced Lur’e inclusions. No stability or positivity properties are required for this result.
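To fix ideas, the following minimal sketch simulates a singleton-valued instance of (2.5) by the forward Euler method; the matrices, the saturation nonlinearity and the step size are assumptions chosen purely for illustration and do not appear in the text.

```python
# Forward-Euler simulation of a singleton-valued Lur'e system
#   dx/dt = A x + B f(C x),  x(0) = x0,
# with assumed example data: A Metzler and Hurwitz, B, C >= 0.

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[-2.0, 1.0], [0.0, -1.0]]   # Metzler: nonnegative off-diagonal entries
B = [[1.0], [0.0]]
C = [[0.0, 1.0]]

def f(y):                        # a saturation nonlinearity (singleton-valued G)
    return [min(max(yi, 0.0), 1.0) for yi in y]

def simulate(x0, h=0.01, steps=2000):
    x = list(x0)
    for _ in range(steps):
        u = f(mat_vec(C, x))
        dx = [a + b for a, b in zip(mat_vec(A, x), mat_vec(B, u))]
        x = [xi + h * di for xi, di in zip(x, dx)]
    return x

x_final = simulate([1.0, 1.0])
```

For this data the trajectory remains in the nonnegative orthant and decays to zero, consistent with the positivity and stability results developed below.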
Lemma 2.1
For model data (2.4) and \(x^0 \in \mathbb R^n\), let \(x: [0, \omega ) \rightarrow \mathbb R^n\) denote a solution of (2.5) for some \(\omega >0\). Then x satisfies the inclusion
$$\begin{aligned} x(t) - \mathrm {e}^{At} x^0 \in \int _0^t \mathrm {e}^{A(t-\tau )}\big [BG(\tau ,Cx(\tau )) +D(\tau ) \big ]\, d\tau \quad \forall \, t \in [0, \omega ). \end{aligned}$$
(2.6)
Proof
Let \(0< t <\omega \) be fixed, but arbitrary, and define an absolutely continuous function z on [0, t] by \(z(\tau ) := \mathrm {e}^{A(t-\tau )} x(\tau )\). Obviously, \(z(0) = \mathrm {e}^{At}x^0\) and \(z(t) = x(t)\), and a routine calculation shows that z satisfies the differential inclusion
$$\begin{aligned} \frac{\mathrm{d}}{\mathrm{d}\tau } z(\tau ) \in \mathrm {e}^{A(t-\tau )}\big [ BG(\tau , Cx(\tau )) + D(\tau ) \big ] \quad \text {almost all }\tau \in (0,t). \end{aligned}$$
Therefore, in light of (2.2),
$$\begin{aligned} z(t) - z(0) = \int _0^t \frac{\mathrm{d}}{\mathrm{d}\tau } z(\tau ) \, d\tau \in \int _0^t \mathrm {e}^{A(t-\tau )}\big [ BG(\tau , Cx(\tau )) + D(\tau ) \big ] \, d\tau , \end{aligned}$$
which implies that
$$\begin{aligned} x(t) - \mathrm {e}^{At} x^0 \in \int _0^t \mathrm {e}^{A(t-\tau )}\big [ BG(\tau , Cx(\tau )) + D(\tau ) \big ]\, d\tau . \end{aligned}$$
Since \(0<t < \omega \) was arbitrary, we conclude that (2.6) holds. \(\square \)
The following equivalence, known as “loop-shifting” in control engineering jargon, shall also play a key role, and is easily established. Namely, for fixed \(K\in \mathbb R^{m \times p}\), \(x^0 \in \mathbb R^n\) and model data (2.4), the function x is a solution of (2.5) if, and only if, x is a solution of
$$\begin{aligned} \dot{x}(t) - (A+BKC)x(t) \in B \Big (G(t,Cx(t)) - KCx(t) \Big )+ D(t), \quad x(0) = x^0, \quad t \in \mathbb R_+. \end{aligned}$$
(2.7)
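The equivalence underlying loop-shifting is immediate from adding and subtracting the term \(BKCx(t)\) in (2.5); indeed,
$$\begin{aligned} \dot{x}(t) - (A+BKC)x(t) = \big (\dot{x}(t) - Ax(t)\big ) - BKCx(t) \in B\Big (G(t,Cx(t)) - KCx(t) \Big ) + D(t). \end{aligned}$$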
We introduce the positivity and stability properties of A, B, C, D and G as in (2.4):
- (A1): \((A,B,C) \in \mathbb R^{n \times n} \times \mathbb R^{n\times m}_+ \times \mathbb R^{p \times n}_+\), A is Metzler, \(D : \mathbb R_+ \rightarrow P_0(\mathbb R^n_+)\) and \(G : \mathbb R_+ \times \mathbb R^p_+ \rightarrow P_0(\mathbb R^m_+)\);
- (A2): \(\alpha (A)<0\);
- (A3): there exists \(R>0\) such that
$$\begin{aligned} \Vert w \Vert \le R \Vert y\Vert \quad \forall \; w \in G(t,y) \; \; \forall \, t \in \mathbb R_+ \; \; \forall \, y \in \mathbb R^p_+. \end{aligned}$$
We comment that we shall interpret (A1) and (A3) in the autonomous case as well, that is, with \(G(t,y) = F(y)\) for some \(F : \mathbb R^p \rightarrow P_0(\mathbb R^m)\).
We briefly discuss an issue pertaining to domains which often arises when studying positive systems, to reconcile the general case (2.4) to the assumptions made in (A1). Indeed, under (A1), the function G is only defined on \(\mathbb R_+ \times \mathbb R^p_+\). Some existence theory (see, for example [25, Theorem 1, p. 97]) for (2.1) assumes that \(H : I \times \varOmega \rightarrow P_0(X)\) where \(I \subseteq \mathbb R\) and \(\varOmega \subseteq X \subseteq \mathbb R^n\) are open, which clearly fails with \(I = \mathbb R_+\) and \(\varOmega = \mathbb R^n_+\). Therefore, if \(J : \mathbb R_+ \times \mathbb R^n_+ \rightarrow P_0(\mathbb R^n_+)\) denotes the right hand side of the differential inclusion (2.5) under (A1), then we extend J to \(\mathbb R\times \mathbb R^n\), and denote the extension \(J_\mathrm{e}: \mathbb R\times \mathbb R^n \rightarrow P_0(\mathbb R^n_+)\), by setting \( J_\mathrm{e}(t,x) := J(\max \{0,t\},\mu (x))\) with \(\mu : \mathbb R^n \rightarrow \mathbb R^n_+\) defined by
$$\begin{aligned} \mu (x) := \begin{pmatrix} \max \{0, x_1\}&\max \{0, x_2\}&\ldots&\max \{0, x_n\} \end{pmatrix}^T \quad \forall \, x = \begin{pmatrix}x_1&\ldots&x_n \end{pmatrix}^T \in \mathbb R^n. \end{aligned}$$
(2.8)
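A minimal sketch of this extension construction, with a hypothetical single-valued stand-in for J (the particular J below is an assumption for illustration, not the right hand side of (2.5)):

```python
# Extension J_e(t, x) := J(max{0, t}, mu(x)) of a map J defined only on
# R_+ x R^n_+, following (2.8); mu projects onto the nonnegative orthant.

def mu(x):
    # componentwise max with zero, as in (2.8); Lipschitz with constant one
    return [max(0.0, xi) for xi in x]

def J(t, x):
    # hypothetical stand-in, singleton-valued, mapping into R^n_+
    return {tuple(xi / (1.0 + t) for xi in x)}

def J_e(t, x):
    return J(max(0.0, t), mu(x))
```

On \(\mathbb R_+ \times \mathbb R^n_+\) the two maps agree, while `J_e` is defined for all arguments.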
Since J and \(J_\mathrm{e}\) coincide on \(\mathbb R_+ \times \mathbb R^n_+\), we may seek solutions of \(\dot{x}(t) \in J_\mathrm{e}(t, x(t))\) instead. An immediate consequence of the positivity assumptions in (A1) is the following.
Lemma 2.2
Under assumption (A1), every solution \(x:[0, \omega ) \rightarrow \mathbb R^n\) of (2.5) with \(x^0 \in \mathbb R^n_+\) takes values in \(\mathbb R^n_+\) for all \(t \in [0, \omega )\).
Proof
The claim follows from the variation of parameters inclusion (2.6) combined with the nonnegativity assumptions in (A1). Crucially, we have used that A is Metzler if, and only if, \(\mathrm{e}^{At} \ge 0\) for all \(t\ge 0\); see, for example [10, Section 3.1].\(\square \)
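The Metzler characterisation used above may be illustrated numerically: the sketch below approximates \(\mathrm{e}^{At}\) by a truncated power series and compares a Metzler with a non-Metzler matrix (both matrices are assumed example data).

```python
# A Metzler implies e^{At} >= 0 entrywise for t >= 0; a negative
# off-diagonal entry of A destroys this. Here e^{At} is approximated by a
# truncated power series (adequate for these small, mildly scaled matrices).

def mat_mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

def expm(A, t, terms=40):
    n = len(A)
    term = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    total = [row[:] for row in term]
    At = [[a * t for a in row] for row in A]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, At)]
        total = [[total[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return total

E_metzler = expm([[-2.0, 1.0], [1.0, -3.0]], 1.0)       # entrywise >= 0
E_not_metzler = expm([[-1.0, -1.0], [0.0, -1.0]], 1.0)  # has a negative entry
```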
We conclude this section by commenting that the loop-shifting property, see (2.7), holds without the assumptions (A1)–(A3). It demonstrates that we may replace A and G in (1.3) by \(A+BKC\) and \((t,y) \mapsto G(t,y) - Ky\), respectively. In particular, A may fail to satisfy assumption (A2), yet \(A+BKC\) may satisfy it for some \(K \in \mathbb R^{m\times p}\); in other words, there is a so-called stabilising static output feedback \(u = Ky\) for the linear system (1.2). This is the approach we take in Example 4.1. Of course, in the current setting of positive Lur’e inclusions, the choice of K is constrained by the requirement that \(A+BKC\) and \((t,y) \mapsto G(t,y) - Ky\) satisfy (A1) as well, which may be infeasible in some cases.
Unforced systems
We consider the Lur’e inclusion (1.1) with assumptions (A1)–(A3). Recall from the previous section that this is a special case of (2.5). We note that the assumptions on A, B, C and F do not a priori guarantee that (1.1) admits solutions. Existence of solutions is not the focus of the present investigation, primarily as it is an extensively studied subject in the literature. That said, we do make some comments and provide some references regarding the existence of solutions. Recall the extension \(J_\mathrm{e}\) from Sect. 2, defined in terms of \(\mu \) from (2.8). The following lemma shows that \(J_\mathrm{e}\) inherits many useful properties from J. The proofs are readily established once it is noted that \(\mu \) is Lipschitz with Lipschitz constant equal to one.
Lemma 2.3
Let \(J : \mathbb R_+ \times \mathbb R^n_+ \rightarrow P_0(\mathbb R^n_+)\) and define \(J_\mathrm{e}(t,x) := J(\max \{0,t\}, \mu (x))\) where \(\mu \) is given by (2.8). If J has any of the following properties: bounded, closed valued, convex valued, upper semi-continuous, lower semi-continuous, then \(J_\mathrm{e}\) has the corresponding property.
Examples of results ensuring existence of solutions of (1.1) include [25, Theorem 3, p. 98], [26, Proposition 6.1, p. 53] and [27, Theorem 7.5.1, p. 279]. The results [25, Theorems 1 and 4, p. 97, p. 101], [26, Lemma 5.1, p. 53] and [26, Theorem 6.1, p. 53] provide conditions under which global solutions of (1.1) exist, from which we obtain the following.
Proposition 2.4
Given the Lur’e inclusion (1.1), assume that (A1) and (A3) hold and that F is upper semi-continuous with closed, convex values. Then, for all \(x^0 \in \mathbb R^n_+\), there is a global solution of (1.1), and every solution may be extended to a global solution. Further, every solution x satisfies \(x(t) \in \mathbb R^n_+\) for all t where x(t) is defined.
Proof
The proposition follows from applications of [26, Lemma 5.1, p. 53] and [26, Corollary 5.2, p. 58], which establish existence of solutions and extension to global solutions, respectively. Lemma 2.2 ensures that every solution is nonnegative. \(\square \)
We shall later briefly consider non-autonomous versions of (1.1), specifically (2.5). In this context [26, Theorem 5.2, p. 58] provides further assumptions on G which, combined with (A1)–(A3), guarantee the existence of global solutions. In fact, under (A1)–(A3), any result guaranteeing existence of local solutions which uses continuous (or Carathéodory) selections of \(x \mapsto Ax + BF(Cx)\) (or \((t,x) \mapsto Ax + BG(t,Cx)\)) will, in fact, ensure existence of global solutions by well-known theory of maximally defined solutions of ordinary differential equations (see the proof of [25, Theorem 1, p. 97]).
The main result of this section is presented next, and contains a suite of stability results for (1.1) formulated in terms of weighted one-norm inequalities and \(\mathbf{G}(0)\), assuming that global solutions exist. Here \(\mathbf{G}\) denotes the transfer function of the triple (A, B, C), that is, \(\mathbf{G}(s) = C(sI-A)^{-1}B\), where s is a complex variable. Recall that a Metzler matrix \(A \in \mathbb R^{n \times n}\) satisfies \(\alpha (A) <0\) if, and only if, A is invertible with \(-A^{-1} \ge 0\) (see, for example [5, Characterisation N\({}_{38}\) in Section 6.2] or [66, characterisation F\({}_{15}\)]). Consequently, under assumptions (A1) and (A2), it follows that \(\mathbf{G}(0) = -CA^{-1}B \in \mathbb R^{p \times m}_+\).
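For illustration, the following sketch evaluates \(\mathbf{G}(0) = -CA^{-1}B\) for assumed example data satisfying (A1) and (A2) and confirms its nonnegativity.

```python
# G(0) = -C A^{-1} B for an assumed Metzler, Hurwitz A and nonnegative B, C;
# here n = 2 and m = p = 1, so G(0) is a scalar.

def inv2(M):
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

A = [[-2.0, 1.0], [1.0, -3.0]]   # Metzler with alpha(A) < 0
B = [[1.0], [0.0]]
C = [[0.0, 1.0]]

Ainv = inv2(A)
AinvB = [sum(Ainv[i][k] * B[k][0] for k in range(2)) for i in range(2)]
G0 = -sum(C[0][i] * AinvB[i] for i in range(2))   # equals 0.2 for this data
```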
Theorem 2.5
Given the Lur’e inclusion (1.1), assume that (A1)–(A3) hold.
- (i) If there exists a strictly positive \(v \in \mathbb R^p_+\) such that
$$\begin{aligned} \vert \mathbf{G}(0) w \vert _{v} \le \vert y \vert _{v} \quad \forall \; w \in F(y) \; \; \forall \, y \in \mathbb R^p_+, \end{aligned}$$
(2.9)
then there exists \(\varGamma >0\) such that, for all \(x^0 \in \mathbb R^n_+\), every global solution x of (1.1) satisfies
$$\begin{aligned} \Vert x(t) \Vert \le \varGamma \Vert x^0\Vert \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
- (ii) If there exist a strictly positive \(v \in \mathbb R^p_+\) and a lower semi-continuous function \(e : \mathbb R^p_+ \rightarrow \mathbb R_+\) such that
$$\begin{aligned} e(y) \; >0 \quad \text {and} \quad \vert \mathbf{G}(0) w \vert _{v} +e(y) \; \le \vert y \vert _{v} \quad \forall \; w \in F(y) \; \; \forall \, y \in \mathbb R^p_+{\setminus }\{0\}, \end{aligned}$$
(2.10)
then, for all \(x^0 \in \mathbb R^n_+\), every global solution x of (1.1) satisfies \(x(t) \rightarrow 0\) as \(t \rightarrow \infty \).
- (iii) If there exist a strictly positive \(v \in \mathbb R^p_+\) and \(\rho \in (0,1)\) such that
$$\begin{aligned} \vert \mathbf{G}(0) w \vert _{v} \le \rho \vert y \vert _{v} \quad \forall \; w \in F(y) \; \; \forall \, y \in \mathbb R^p_+, \end{aligned}$$
(2.11)
then there exist \(\varGamma , \gamma >0\) such that, for all \(x^0 \in \mathbb R^n_+\), every global solution x of (1.1) satisfies
$$\begin{aligned} \Vert x(t) \Vert \le \varGamma \mathrm{e}^{-\gamma t}\Vert x^0\Vert \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
The notion of stability concluded in statement (i) is often called “stability in the large”, see [36, Definition 3]. Combined with statements (ii) and (iii), it yields that the zero equilibrium of (1.1) is globally asymptotically stable and globally exponentially stable, respectively. Before proving the above theorem, we provide some commentary on assumptions (2.9)–(2.11).
Remark 2.6
The weighted one-norm estimates (2.9)–(2.11) provide conditions on the norm of the “product” \(\mathbf{G}(0)F(y)\), and not on a product of norms. Sufficient conditions for (2.9)–(2.11) are linear constraints which are reminiscent of sector-type conditions on the nonnegative orthant. Namely, if there exists an irreducible matrix \(M \in \mathbb R^{p \times p}_+\) with \(r(M)\le 1\) such that
$$\begin{aligned} \mathbf{G}(0) w \le My \quad \forall \; w \in F(y) \; \; \forall \, y \in \mathbb R^p_+, \end{aligned}$$
(2.12)
then (2.9) holds. To see this, multiply both sides of (2.12) on the left by \(v^T\), a strictly positive left eigenvector of M corresponding to the eigenvalue r(M), the existence of which is ensured by the Perron–Frobenius Theorem (see, for example [5, Theorem 1.4, p. 27]). Further, if there exists an irreducible matrix \(M \in \mathbb R^{p \times p}_+\) with \(r(M)\le 1\) and a lower semi-continuous function \(\zeta : \mathbb R^p_+ \rightarrow \mathbb R_+^p\) such that
$$\begin{aligned} \zeta (y) \; >0 \quad \text {and} \quad \mathbf{G}(0) w + \zeta (y) \; < My \quad \; \forall \, w \in F(y) \; \; \forall \, y \in \mathbb R^p_+{\setminus }\{0\}, \end{aligned}$$
then (2.10) holds with \(e := v^T \zeta \), by the same argument as above. If there exists a nonnegative (not necessarily irreducible) matrix M which satisfies (2.12) with the property that \(r(M) <1\), then (2.11) holds. \(\square \)
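The Perron–Frobenius step in the remark can be checked numerically; the irreducible matrix M below is an assumption chosen for illustration, with \(r(M) = 0.9\).

```python
# Power iteration for a strictly positive left eigenvector v of an
# irreducible nonnegative M with r(M) <= 1; then v^T M y <= v^T y for y >= 0.

M = [[0.5, 0.4], [0.3, 0.6]]   # irreducible, nonnegative, r(M) = 0.9

def left_pf_vector(M, iters=200):
    v = [1.0] * len(M)
    for _ in range(iters):
        w = [sum(v[i] * M[i][j] for i in range(len(M))) for j in range(len(M))]
        s = sum(w)
        v = [wi / s for wi in w]   # normalise so the entries sum to one
    return v

v = left_pf_vector(M)
y = [2.0, 3.0]
My = [sum(M[i][j] * y[j] for j in range(2)) for i in range(2)]
lhs = sum(vi * myi for vi, myi in zip(v, My))   # v^T M y = r(M) v^T y
rhs = sum(vi * yi for vi, yi in zip(v, y))      # v^T y
```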
Proof of Theorem 2.5
Throughout the proof, we let \(x:\mathbb R_+ \rightarrow \mathbb R^n_+\) denote a global solution of (1.1) for given \(x^0 \in \mathbb R^n_+\).
(i): As \(\mathbf{G}(0) \ge 0\), the condition (2.9) may be rewritten as
$$\begin{aligned} v^T \mathbf{G}(0) F(Cx(t)) \subseteq [0, v^T Cx(t)] \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
(2.13)
Multiplying both sides of (1.1) by \(-v^TCA^{-1}\) and invoking (2.13) yields that
$$\begin{aligned} -v^TCA^{-1}\dot{x}(t)&\in -v^TCA^{-1}\big [Ax(t) + B F(Cx(t))\big ] = -v^TCx(t) +v^T\mathbf{G}(0)F(Cx(t)) \\&\subseteq [-v^TCx(t), 0] \quad \text {for almost all } t \ge 0, \end{aligned}$$
whence
$$\begin{aligned} -v^TCA^{-1}\dot{x}(t) \le 0\quad \text {for almost all } t \ge 0, \end{aligned}$$
(2.14)
and, further,
$$\begin{aligned} 0 \le -v^TC A^{-1} x(t) \le -v^TC A^{-1} x^0 \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
(2.15)
We claim that there exists \(\theta >0\) such that
$$\begin{aligned} \theta z \le -A^{-1} z \quad \forall \, z \in \mathbb R^n_+. \end{aligned}$$
(2.16)
To that end, choose \(\beta >0\) such that \(A+ \beta I \ge 0\), so that in particular
$$\begin{aligned} \mathrm {e}^{(A +\beta I)t} = \sum _{k =0}^\infty \frac{t^k}{k!} (A+\beta I)^k \ge I \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
(2.17)
As A is Metzler with \(\alpha (A)<0\), in light of (2.17) we have, for \(z \in \mathbb R^n_+\),
$$\begin{aligned} -A^{-1} z= \int _0^\infty \mathrm {e}^{At} z\, dt = \int _0^\infty \mathrm {e}^{(A + \beta I)t} \mathrm {e}^{-\beta t} z\, dt \ge \int _0^\infty \mathrm {e}^{-\beta t} z\, dt = \frac{z}{\beta }, \end{aligned}$$
which is (2.16) with \(\theta := 1/\beta >0\). Using (2.15) and (2.16) shows that
$$\begin{aligned} \theta v^T C x(t) \le -v^TCA^{-1}x(t) \le -v^TCA^{-1}x^0 \quad \forall \, t \in \mathbb R_+, \end{aligned}$$
which implies, by norm equivalence, that there exists \(P >0\) such that
$$\begin{aligned} \Vert Cx(t) \Vert \le P\Vert x^0\Vert \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
(2.18)
Invoking the variation of parameters inclusion (2.6) of Lemma 2.1 with \(G(t,Cx(t)) = F(Cx(t))\) and \(D=\{0\}\), combined with (A3) and (2.18), we may estimate
$$\begin{aligned} \Vert x(t) \Vert \le \left( \Vert \mathrm {e}^{At} \Vert + RP \int _0^t \Vert \mathrm {e}^{A(t-\tau )} B\Vert \, d\tau \right) \Vert x^0 \Vert \quad \forall \, t \in \mathbb R_+, \end{aligned}$$
whence, by (A2), we conclude that statement (i) holds.
(ii): Our hypotheses ensure that statement (i) holds. First, suppose that
$$\begin{aligned} Cx(t) \rightarrow 0 \quad \text {as } t \rightarrow \infty , \end{aligned}$$
(2.19)
and fix \(\varepsilon > 0\). We note that, by (A2), (A3) and (2.19), any \(\xi \) satisfying
$$\begin{aligned} \xi (t) \in \int _{T_1}^t \mathrm {e}^{A(t-\tau )}B F(Cx(\tau )) \, d\tau \quad \forall \,t \ge T_1, \end{aligned}$$
admits the estimate
$$\begin{aligned} \Vert \xi (t) \Vert&\le \int _{T_1}^t \Vert \mathrm {e}^{A(t-\tau )} \Vert \Vert B\Vert {\left| \left| \left| F(Cx(\tau )) \right| \right| \right| } \, d\tau \le \Vert B\Vert R \int _{T_1}^t \Vert \mathrm {e}^{A(t-\tau )} \Vert \Vert Cx(\tau ) \Vert \, d\tau \\&\le \frac{\varepsilon }{2}, \end{aligned}$$
for \(T_1 >0\) sufficiently large. Consequently, \(\Vert \xi (t) \Vert \le \varepsilon /2\) for all \(t \ge T_1\). Since x satisfies
$$\begin{aligned} x(t) - \mathrm {e}^{A(t-T_1)} x(T_1) \in \int _{T_1}^t \mathrm {e}^{A(t-\tau )}BF(Cx(\tau )) \, d\tau \quad \forall \, t \ge T_1, \end{aligned}$$
(which follows easily from (2.6)), invoking (A2) gives \(T_2> T_1\) such that for all \(t \ge T_2\)
$$\begin{aligned} \Vert x(t) - \mathrm {e}^{A(t-T_1)} x(T_1)\Vert \le \frac{\varepsilon }{2} \quad \Rightarrow \quad \Vert x(t) \Vert \le \Vert \mathrm {e}^{A(t-T_1)} x(T_1)\Vert + \frac{\varepsilon }{2} < \varepsilon , \end{aligned}$$
hence \(x(t) \rightarrow 0\) as \(t \rightarrow \infty \), as required, provided that (2.19) holds.
Therefore, it remains to establish (2.19). To this end, fix \(x^0 \in \mathbb R^n_+{\setminus }\{0\}\) and, seeking a contradiction, suppose that (2.19) fails. Then there exist a sequence \((t_k)_{k \in \mathbb N} \subseteq \mathbb R_+\) and \(\varepsilon > 0\) such that \(t_k \nearrow \infty \) as \(k \rightarrow \infty \) and
$$\begin{aligned} 2 \varepsilon \le \Vert Cx(t_k) \Vert \quad \forall \, k \in \mathbb N. \end{aligned}$$
By statement (i) we have that x is bounded, and hence from (1.1) and (A3) it follows that \(\dot{x}\) is bounded as well. Hence x is uniformly continuous, and so is Cx. Consequently, there exists \(\delta >0\) such that
$$\begin{aligned} \varepsilon \le \Vert Cx(t) \Vert \le \varGamma \Vert C \Vert \Vert x^0\Vert \quad \forall \, t \in [t_k, t_k + \delta ] \; \; \forall \, k \in \mathbb N, \end{aligned}$$
(2.20)
where we have used the bound for x from statement (i). As \(t_k \nearrow \infty \) as \(k \rightarrow \infty \) we may assume that \(t_{k+1} > t_{k} + \delta \) for all \(k \in \mathbb N\) (by redefining the sequence \((t_k)_{k \in \mathbb N}\) if necessary). Define
$$\begin{aligned} \mathcal {M}:= \big \{ \xi \in \mathbb R^p_+ {:} \, \varepsilon \le \Vert \xi \Vert \le \varGamma \Vert C \Vert \Vert x^0\Vert \big \}, \end{aligned}$$
(2.21)
a compact set which does not contain zero. We claim that there exists \(\eta >0 \) such that
$$\begin{aligned} \inf _{\begin{array}{c} \xi \in \mathcal {M}\\ w \in F(\xi ) \end{array}}\big [v^T \xi - v^T\mathbf{G}(0)w\big ] \ge \eta . \end{aligned}$$
(2.22)
To establish (2.22), note that by (2.10),
$$\begin{aligned} \big [v^T \xi - v^T\mathbf{G}(0)w\big ] \ge e(\xi ) \ge \inf _{\zeta \in \mathcal {M}} e(\zeta ) =: \eta >0, \quad \forall \, w \in F(\xi ),\; \forall \, \xi \in \mathcal {M}, \end{aligned}$$
as e is a lower semi-continuous function which is positive-valued for positive arguments, and \(\mathcal {M}\) is a compact set which does not contain zero.
Consider next the real-valued, nonnegative function
$$\begin{aligned} f : \mathbb R_+ \rightarrow \mathbb R_+, \quad f(t) := -v^TCA^{-1}x(t), \end{aligned}$$
which is absolutely continuous, non-increasing by (2.14) and, furthermore, satisfies
$$\begin{aligned} \dot{f}(t) = -v^TCA^{-1}\dot{x}(t) \in - v^T C x(t) + v^T\mathbf{G}(0)F(Cx(t)) \quad \text {for almost all} \; t \in \mathbb R_+. \end{aligned}$$
(2.23)
In light of (2.20), (2.22) and (2.23), we see that for every \(k \in \mathbb N\)
$$\begin{aligned} \dot{f}(t) \in (-\infty , -\eta ] \quad \text {for almost all} \; t \in [t_k, t_k+\delta ], \end{aligned}$$
which yields that
$$\begin{aligned} f(t_k + \delta ) - f(t_k) \le -\eta \delta . \end{aligned}$$
Since f is non-increasing and \(t_{k+1} \ge t_k + \delta \) for all \(k \in \mathbb N\), it follows that
$$\begin{aligned} f(t_{k+1}) - f(t_k) \le - \eta \delta , \quad \forall \, k \in \mathbb N, \end{aligned}$$
showing that
$$\begin{aligned} f(t_{N+1}) - f(t_1) = \sum _{k=1}^N \big [ f(t_{k+1}) - f(t_k) \big ]\le - \eta \delta N \rightarrow -\infty \quad \text {as } N \rightarrow \infty , \end{aligned}$$
which contradicts the nonnegativity of f.
(iii): Let \(0 < c_1 \le c_2\) be such that
$$\begin{aligned} c_1 \Vert y \Vert \le \vert y \vert _{v} \le c_2 \Vert y \Vert \quad \forall \, y \in \mathbb R^p. \end{aligned}$$
(2.24)
Fix \(\varepsilon >0\) such that \(\rho + \varepsilon <1\). Since \(\alpha (A)<0\) and as \(\mathbf{G}\) is continuous at 0, there exists \(\delta >0\) such that
$$\begin{aligned} 0< \gamma< \delta \quad \Rightarrow \quad \alpha (A+\gamma I)<0 \quad \text {and} \quad \Vert \mathbf{G}(-\gamma ) - \mathbf{G}(0) \Vert < \frac{\varepsilon c_1}{ c_2 R}. \end{aligned}$$
(2.25)
Thus, for fixed \(\gamma \in (0,\delta )\), we note that \(\mathbf{G}(-\gamma ) \ge 0\) and, further, for \(y \in \mathbb R^p_+\) and \(w \in F(y)\),
$$\begin{aligned} \vert \mathbf{G}(-\gamma )w \vert _{v}&= v^T\mathbf{G}(-\gamma )w = v^T[\mathbf{G}(-\gamma )-\mathbf{G}(0)]w + \vert \mathbf{G}(0)w \vert _{v} \nonumber \\&\le \vert [\mathbf{G}(-\gamma )-\mathbf{G}(0)]w \vert _{v} + \vert \mathbf{G}(0)w \vert _{v}\nonumber \\&\le c_2 \Vert \mathbf{G}(-\gamma ) - \mathbf{G}(0) \Vert \cdot \Vert w \Vert + \rho \vert y \vert _{v} \le \frac{\varepsilon c_1}{R}\Vert w \Vert + \rho \vert y \vert _{v} \le (\varepsilon + \rho ) \vert y \vert _{v}\nonumber \\&\le \vert y \vert _{v}, \end{aligned}$$
(2.26)
where we have used (A3), (2.11), (2.24) and (2.25). Multiplying both sides of (2.26) by \(\mathrm {e}^{\gamma t}\), and taking \(w \in F(\mathrm {e}^{-\gamma t} \xi )\) with \(\xi \in \mathbb R^p_+\), we see that
$$\begin{aligned} \vert \mathrm{e}^{\gamma t}\mathbf{G}(-\gamma )w \vert _{v} \le \mathrm {e}^{\gamma t} \cdot \vert \mathrm {e}^{-\gamma t} \xi \vert _{v} = \vert \xi \vert _{v} \quad \forall \, w \in F(\mathrm {e}^{-\gamma t} \xi ), \quad \forall \, \xi \in \mathbb R^p_+, \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
(2.27)
Define \(z(t):= \mathrm {e}^{\gamma t} x(t)\) for \(t \in \mathbb R_+\). A routine calculation using (1.1) shows that z is a solution of
$$\begin{aligned} \dot{z}(t) - (A + \gamma I)z(t) \in B \mathrm {e}^{\gamma t} F(\mathrm {e}^{-\gamma t} Cz(t)), \quad z(0) = x^0, \quad t \in \mathbb R_+. \end{aligned}$$
(2.28)
Multiplying both sides of (2.28) by \(-v^T C(A + \gamma I)^{-1} \) and invoking (2.27), we obtain the estimate
$$\begin{aligned} \theta _\gamma v^T C z(t) \le -v^T C(A + \gamma I)^{-1} z(t) \le -v^T C(A + \gamma I)^{-1} x^0 \quad \forall \, t \in \mathbb R_+, \end{aligned}$$
for some \(\theta _\gamma >0\), so that there exists \(L >0\) such that
$$\begin{aligned} \Vert Cz(t)\Vert \le L \Vert x^0\Vert \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
Applying Lemma 2.1 to (2.28) with \(D = \{0\}\) and \(G(t,y) = \mathrm {e}^{\gamma t} F(\mathrm {e}^{-\gamma t} y)\) shows that
$$\begin{aligned} z(t) - \mathrm {e}^{(A + \gamma I)t} x^0 \in \int _0^t \mathrm {e}^{(A + \gamma I)(t-\tau )}B \mathrm{e}^{\gamma \tau } F(\mathrm{e}^{-\gamma \tau }Cz(\tau )) \, d\tau \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
Invoking the boundedness of Cz and (A3) we see that
$$\begin{aligned} \Vert z(t) \Vert&\le \Vert \mathrm {e}^{(A + \gamma I)t} x^0 \Vert + \int _0^t {\left| \left| \left| \mathrm {e}^{(A + \gamma I)(t-\tau )}B \mathrm {e}^{\gamma \tau } F( \mathrm {e}^{-\gamma \tau } Cz(\tau )) \right| \right| \right| } \, d\tau \\&\le \left( \Vert \mathrm {e}^{(A + \gamma I)t} \Vert + L R \int _0^t \left\| \mathrm {e}^{(A + \gamma I)(t-\tau )}B \right\| \, d\tau \right) \Vert x^0\Vert \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
Since \(\alpha (A + \gamma I)<0\) and \(x^0 \in \mathbb R^n_+\) was arbitrary, we conclude that there exists \(\varGamma >0\) such that
$$\begin{aligned} \Vert z(t) \Vert \le \varGamma \Vert x^0\Vert \quad \forall \, t \in \mathbb R_+, \; \forall \, x^0 \in \mathbb R^n_+, \end{aligned}$$
and so
$$\begin{aligned} \Vert x(t) \Vert \le \varGamma \mathrm {e}^{-\gamma t} \Vert x^0\Vert \quad \forall \, t \in \mathbb R_+, \; \forall \, x^0 \in \mathbb R^n_+, \end{aligned}$$
as required. \(\square \)
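The exponential decay asserted in statement (iii) can be observed numerically. In the sketch below, F is singleton-valued with \(F(y) = \{y\}\), and the assumed example data gives \(\mathbf{G}(0) = 0.2\), so (2.11) holds with \(\rho = 0.2\).

```python
# Forward-Euler simulation of dx/dt = A x + B F(C x) with F(y) = y;
# all data are assumptions for illustration. The max-norm of the state
# decays exponentially, as predicted by statement (iii).

def mat_vec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

A = [[-2.0, 1.0], [1.0, -3.0]]   # Metzler, Hurwitz; G(0) = 0.2 here
B = [[1.0], [0.0]]
C = [[0.0, 1.0]]

x = [1.0, 1.0]
h, steps = 0.01, 1000
for _ in range(steps):
    u = mat_vec(C, x)            # F(y) = {y}: the only selection is y itself
    dx = [a + b for a, b in zip(mat_vec(A, x), mat_vec(B, u))]
    x = [xi + h * di for xi, di in zip(x, dx)]

final_norm = max(abs(xi) for xi in x)
```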
In certain cases, the weighted one-norm estimate (2.9) itself implies that F must satisfy (A3), which we formulate as the next lemma.
Lemma 2.7
Assume that A, B, C and F satisfy (A1) and (A2), and let \(\mathbf{G}(s) = C(sI-A)^{-1}B\). If (2.9), and at least one of the two conditions
- (a) \(m =p\) and \(\mathbf{G}(0)\) is irreducible;
- (b) \(B,C \ne 0\), B has no zero columns and there exists \(\varDelta \in \mathbb R^{m \times p}_+\) such that \(A + B\varDelta C\) is irreducible;
hold, then F satisfies (A3).
Proof
Assume that (a) holds. By the Perron–Frobenius theorem, irreducibility of \(\mathbf{G}(0)\) implies that there exist a strictly positive \(\nu \in \mathbb R^m_+\) and \(r >0\) such that
$$\begin{aligned} \nu ^T \mathbf{G}(0) = r \nu ^T \quad \text {and so} \quad \vert \mathbf{G}(0)z \vert _{\nu } = r \vert z \vert _{\nu } \quad \forall \, z \in \mathbb R^m_+. \end{aligned}$$
The above equality, combined with the equivalence of all norms on \(\mathbb R^m\), implies that there exist \(\theta _1, \theta _2 >0\) such that
$$\begin{aligned} \theta _1 \Vert z \Vert \le \Vert \mathbf{G}(0) z \Vert \le \theta _2 \vert \mathbf{G}(0) z \vert _{v} \quad \forall \, z \in \mathbb R^m_+. \end{aligned}$$
(2.29)
Thus, by (2.9) and (2.29)
$$\begin{aligned} \theta _1 \Vert w \Vert \le \Vert \mathbf{G}(0) w \Vert \le \theta _2 \vert \mathbf{G}(0) w \vert _{v} \le \theta _2 \vert y \vert _{v} \le c \theta _2 \Vert y \Vert \quad \forall \, w \in F(y), \; \; \forall \, y \in \mathbb R^p_+, \end{aligned}$$
for some \(c >0\), which demonstrates that (A3) holds with \(R : = c\theta _2/\theta _1\).
Now assume that (b) holds. Suppose that \(z \in \mathbb R^m_+\) is such that \( \mathbf{G}(0)z =0\). Since \(\alpha (A) <0\) we have that
$$\begin{aligned} 0 = \mathbf{G}(0)z = -CA^{-1}B z = \int _0^\infty C \mathrm {e}^{(A + \delta I)t} \mathrm {e}^{-\delta t}B z \,dt \ge 0, \end{aligned}$$
(2.30)
where \(\delta >0\) is such that \(0 \le A + \delta I\) (such a \(\delta \) exists as A is Metzler). As the integrand is continuous and nonnegative, (2.30) implies that
$$\begin{aligned} 0 = C \mathrm {e}^{(A+ \delta I)t} B z = \sum _{k=0}^\infty \frac{t^k}{k!} C (A+\delta I)^k B z \quad \forall \, t\ge 0, \end{aligned}$$
whence,
$$\begin{aligned} C (A+\delta I)^k B z = 0 \quad \forall \, k \in \mathbb N_0, \end{aligned}$$
and, therefore,
$$\begin{aligned} C (A+\delta I + B\varDelta C)^k B z =0\quad \forall \, k \in \mathbb N_0. \end{aligned}$$
(2.31)
Let \(c_i^T \ne 0\) and \(b_j\) denote the i-th row and j-th column of C and B, respectively, where we have used that \(C \ne 0\). For each \(r \in \underline{m}\), we see from (2.31) that
$$\begin{aligned} c_i^T (A + \delta I+ B \varDelta C)^k b_r z_r = 0 \quad \forall \, k \in \mathbb N_0. \end{aligned}$$
(2.32)
As \(b_r \ne 0 \) for every \(r \in \underline{m}\) and \(A + \delta I+ B \varDelta C\) is irreducible, it follows from (2.32), by appropriate choices of \(k \in \underline{n}\), that \(z_r = 0\). Since \(r \in \underline{m}\) was arbitrary, we deduce that \(z=0\), and thus \(\mathbf{G}(0)\) has no zero columns. Defining
$$\begin{aligned} \beta = \min _{j \in \underline{m}} \left( \sum _{i=1}^p [\mathbf{G}(0)]_{ij}\right) >0, \end{aligned}$$
we obtain, for \(z\in \mathbb R^m_+\),
$$\begin{aligned} \Vert \mathbf{G}(0) z \Vert _1 = \sum _{i=1}^p (\mathbf{G}(0) z)_i = \sum _{j=1}^m \left( \sum _{i=1}^p [\mathbf{G}(0)]_{ij} \right) z_j \ge \beta \sum _{j=1}^m z_j = \beta \Vert z\Vert _1. \end{aligned}$$
We deduce that (2.29) holds, for some \(\theta _1,\theta _2 >0\) which, when combined with (2.9) and using arguments identical to those used in the first part of the proof, establishes the claim. \(\square \)
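The column-sum bound at the end of the proof can be sketched numerically; the nonnegative matrix below, which has no zero columns, is an assumed example.

```python
# With beta the smallest column sum of a nonnegative matrix G0 having
# no zero columns, ||G0 z||_1 >= beta ||z||_1 for every z >= 0.

G0 = [[0.2, 0.1], [0.3, 0.5]]   # column sums 0.5 and 0.6, so beta = 0.5

col_sums = [sum(G0[i][j] for i in range(len(G0))) for j in range(len(G0[0]))]
beta = min(col_sums)

def one_norm(v):
    return sum(abs(vi) for vi in v)

z = [1.0, 2.0]
G0z = [sum(G0[i][j] * z[j] for j in range(len(z))) for i in range(len(G0))]
```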
The condition (2.10) is intermediate between (2.9) and (2.11) and, as the next result shows, it simplifies if more regularity assumptions are imposed on F.
Corollary 2.8
Assume that (A1)–(A3) hold and that F is upper semi-continuous with closed values. If there exists a strictly positive \(v \in \mathbb R^p_+\) such that
$$\begin{aligned} \quad \vert \mathbf{G}(0) w \vert _{v} < \vert y \vert _{v} \quad \forall \; w \in F(y) \; \; \forall \, y \in \mathbb R^p_+{\setminus }\{0\}, \end{aligned}$$
(2.33)
then, for all \(x^0 \in \mathbb R^n_+\), every global solution x of (1.1) satisfies \(x(t) \rightarrow 0\) as \(t \rightarrow \infty \).
Proof
We note that it is sufficient to show that (2.22) holds, because in this case the proof of the corollary may be completed by arguments identical to those used in the proof of statement (ii) of Theorem 2.5. To see that (2.22) holds, let \(y_k \in \mathcal {M}\) and \(w_k \in F(y_k)\) be such that
$$\begin{aligned} v^T y_k - v^T \mathbf{G}(0) w_k \rightarrow \inf \big \{ v^T \xi - v^T\mathbf{G}(0)w {:} \, w \in F(\xi ), \; \xi \in \mathcal {M}\big \} \quad \text {as } k \rightarrow \infty , \end{aligned}$$
where \(\mathcal {M}\) is given by (2.21). Since \(y_k \in \mathcal {M}\) and \(\mathcal {M}\) is compact, \((y_k)_{k \in \mathbb N}\) has a convergent subsequence, not relabelled, with limit \(y_* \in \mathcal {M}\), hence \(y_* \ne 0\). The upper semi-continuity of F means that we may choose another subsequence of \((y_k)_{k \in \mathbb N}\), again not relabelled, such that
$$\begin{aligned} w_k \in F(y_k) \subseteq F(y_*) + B(0,1/k), \end{aligned}$$
(2.34)
where \(B(x,r) \subseteq \mathbb R^m\) denotes the open ball centred at x with radius \(r >0\). Assumption (A3) implies that \(F(y_*)\) is bounded and so, by (2.34), \((w_k)_{k \in \mathbb N}\) is bounded, and hence has a convergent subsequence, not relabelled, with limit \(w_*\). Necessarily, from (2.34) we see that \(w_* \in \overline{F(y_*)} = F(y_*)\), as F is assumed to be closed valued. We conclude that
$$\begin{aligned} \inf \big \{ v^T \xi - v^T\mathbf{G}(0)w {:} \, w \in F(\xi ), \; \xi \in \mathcal {M}\big \}= & {} \lim _{k \rightarrow \infty } \big [ v^T y_k - v^T \mathbf{G}(0) w_k \big ] \\= & {} v^T y_* - v^T \mathbf{G}(0) w_* >0, \end{aligned}$$
by (2.33) as \(y_* \ne 0\), showing that (2.22) holds. \(\square \)
Although formulated for autonomous problems, Theorem 2.5 readily extends to the non-autonomous differential inclusion (2.5) (still with \(D=\{0\}\)), provided the conditions in Theorem 2.5 hold uniformly in time. We formulate these claims in the next corollary.
Corollary 2.9
Given the Lur’e inclusion (2.5) with \(D = \{0\}\), assume that A, B, C and G satisfy (A1)–(A3). Define
$$\begin{aligned} F : \mathbb R^p_+ \rightarrow P_0(\mathbb R_+^m), \quad F(y) \;:= \bigcup _{t \ge 0} G(t,y). \end{aligned}$$
(2.35)
If F given by (2.35) satisfies the conditions (2.9)–(2.11) in Theorem 2.5, then the conclusions of Theorem 2.5 hold for every global solution of (2.5) with \(D = \{0\}\).
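To illustrate the uniformity in t required by Corollary 2.9, consider a hypothetical scalar example (our choice, not from the text): with \(G(t,y) = \{(0.4 + 0.1\,\mathrm{e}^{-t})\, y\}\), the map \(F(y) = \bigcup_{t \ge 0} G(t,y)\) is contained in \([0.4y, 0.5y]\) for \(y \ge 0\), so a bound of the form (2.11) holds uniformly in t with \(\rho = 0.5\) (taking \(m = p = 1\), \(\mathbf{G}(0) = 1\) and \(v = 1\)). A minimal numerical sketch:

```python
import numpy as np

# Illustrative time-varying gain (our choice): g(t, y) = (0.4 + 0.1*e^{-t}) * y,
# so F(y) = U_{t >= 0} {g(t, y)} is contained in [0.4*y, 0.5*y] and the
# small-gain bound (2.11) holds uniformly in t with rho = 0.5.
ts = np.linspace(0.0, 50.0, 5001)
gains = 0.4 + 0.1 * np.exp(-ts)
print(gains.max())  # uniform bound rho = 0.5, attained at t = 0
```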
Forced systems
We next consider the system of forced Lur’e differential inclusions (1.3), where \(D : \mathbb R_+ \rightarrow P_0(\mathbb R^n_+)\). We say that \(D : \mathbb R_+ \rightarrow P_0(\mathbb R^n_+)\) is (essentially) locally bounded or (essentially) bounded if
$$\begin{aligned} \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \in [0, t]} {\left| \left| \left| D(\tau ) \right| \right| \right| }< \infty \quad \forall \, t \in \mathbb R_+ \quad \text {or} \quad \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \in [0, \infty )} {\left| \left| \left| D(\tau ) \right| \right| \right| } < \infty , \end{aligned}$$
respectively.
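For interval-valued disturbances of the form \(D(t) = [0, d(t)]\), which arise later in (2.53), the magnitude \({\left| \left| \left| D(t) \right| \right| \right| }\) reduces to \(\Vert d(t)\Vert \), so (essential) local boundedness may be checked on d directly. A sketch with an illustrative scalar disturbance (the choice of d below is ours, not from the text), approximating the essential supremum on a grid:

```python
import numpy as np

def d(t):
    # illustrative scalar disturbance: bounded and oscillating
    return 2.0 + np.sin(t)

def local_sup_norm(t, num=1000):
    # grid approximation of ess sup_{tau in [0, t]} |||D(tau)|||,
    # where D(tau) = [0, d(tau)] so |||D(tau)||| = |d(tau)|
    taus = np.linspace(0.0, t, num)
    return max(abs(d(tau)) for tau in taus)

# the supremum is at most 3 for every t, so this D is (essentially) bounded
print(local_sup_norm(10.0))
```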
With regards to the existence of solutions of (1.3), we refer to, for example [26, Chap. 3, Sections 5–6] or [67, Theorems 1.1, 1.3, pp. 30–31]. The following result is based on [26, Theorem 5.2].
Proposition 2.10
Given the forced Lur’e inclusion (1.3), assume that:
-
(i)
(A1)–(A3) hold;
-
(ii)
F has closed, convex values, \(F(t, \cdot )\) is upper semi-continuous and \(F(\cdot ,x)\) is measurable;
-
(iii)
D is measurable, locally bounded with closed, convex values.
Then, for every \(x^0 \in \mathbb R^n_+\), (1.3) has a global solution, and every solution may be extended to a global solution. Further, every solution x satisfies \(x(t) \in \mathbb R^n_+\) for all t where x(t) is defined.
Proof
Existence of global solutions follows from an application of [26, Theorem 5.2], and [26, Corollary 5.2, p. 58] ensures that solutions may be extended to global solutions. Lemma 2.2 ensures that every solution is nonnegative. \(\square \)
Assumption (A3) implies that \(F(0) = \{0\}\), and hence the pair \(x=0\), \(D = \{0\}\) is a solution of (1.3) with \(x^0 = 0\), which we shall hereafter refer to as the zero equilibrium pair. We proceed to state and prove the main result of the present section, namely, that the assumptions made in statement (iii) of Theorem 2.5 are sufficient for the zero equilibrium pair of (1.3) to be exponentially ISS. The proof is similar to that of statement (iii) of Theorem 2.5, and uses a so-called exponential weighting argument, see [36].
As will become apparent in the proof of Corollary 2.12, it is convenient in the next result to impose no assumption on the sign of the disturbance term D, but instead to assume that the state x remains nonnegative.
Theorem 2.11
Given the forced Lur’e inclusion (1.3), assume that (A1)–(A3) hold. If there exist a strictly positive \(v \in \mathbb R^p_+\) and \(\rho \in (0,1)\) such that (2.11) holds, then there exist \(\varGamma , \gamma >0\) such that, for all \(x^0 \in \mathbb R^n_+\), all locally bounded \(D : \mathbb R_+ \rightarrow P_0(\mathbb R^n)\) and every global nonnegative solution x of (1.3),
$$\begin{aligned} \Vert x(t) \Vert \le \varGamma \Big ( \mathrm {e}^{-\gamma t} \Vert x^0\Vert + \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \in [0, t]}{\left| \left| \left| D(\tau ) \right| \right| \right| }\Big )\quad \forall \, t \in \mathbb R_+. \end{aligned}$$
(2.36)
If D is nonnegative-valued, that is \(D(t) \subseteq \mathbb R^n_+\) for almost all \(t \ge 0\), then the hypotheses of Theorem 2.11 ensure that (2.36) holds for every global solution x of (1.3). The inequality (2.36) is the definition of exponential ISS of the zero equilibrium pair (in the current set-valued setting).
Proof of Theorem 2.11
Fix \(x^0 \in \mathbb R^n_+\) and \(D : \mathbb R_+ \rightarrow P_0(\mathbb R^n)\) as above and let x be a global, nonnegative solution of (1.3). Let \(\varepsilon , \gamma >0\) be such that \(\rho + \varepsilon <1\) and \(\alpha (A + \gamma I) <0 \). Defining \(z(t) := \mathrm {e}^{\gamma t}x(t)\) for \(t \ge 0\), which is also nonnegative, an elementary calculation using (1.3) shows that z is a solution of
$$\begin{aligned} \dot{z}(t) - (A + \gamma I)z(t) \in B \mathrm {e}^{\gamma t} F(\mathrm {e}^{-\gamma t} Cz(t)) + \mathrm {e}^{\gamma t} D(t), \quad z(0) = x^0, \quad t \in \mathbb R_+. \end{aligned}$$
(2.37)
Multiplying both sides of the above by \(v^T C(-\gamma I - A)^{-1} \) yields that, for almost all \(t \in \mathbb R_+\)
$$\begin{aligned} v^T C(-\gamma I - A)^{-1} \dot{z}(t) + v^T Cz(t) \in v^T\mathbf{G}(-\gamma ) \mathrm {e}^{\gamma t} F(\mathrm {e}^{-\gamma t} Cz(t)) + H(t) \end{aligned}$$
(2.38)
where
$$\begin{aligned} H(t) := v^T C(-\gamma I - A)^{-1}\mathrm {e}^{\gamma t} D(t) \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
Using arguments similar to those involved in the derivation of (2.26), we see that for sufficiently small \(\gamma >0\)
$$\begin{aligned} v^T \mathbf{G}(-\gamma ) \mathrm {e}^{\gamma t} F(\mathrm {e}^{-\gamma t} Cz(t)) \subseteq [0, (\rho + \varepsilon ) v^T Cz(t)] \quad \forall \, t \in \mathbb R_+, \end{aligned}$$
(2.39)
whence, for almost all \(t \in \mathbb R_+\)
$$\begin{aligned} v^T\mathbf{G}(-\gamma ) \mathrm {e}^{\gamma t} F(\mathrm {e}^{-\gamma t} Cz(t)) + H(t) \subseteq [0, (\rho + \varepsilon ) v^T Cz(t)] + H(t). \end{aligned}$$
(2.40)
Setting \(\delta = 1 - (\rho + \varepsilon ) \in (0,1)\) and
$$\begin{aligned} \left. \begin{aligned} E(t)&:= [-v^T C z(t), -\delta v^T C z(t)] \\ a(t)&: = - \delta v^T \int _0^t C z(\tau ) \, d\tau \\ \xi (t)&:= \kappa \, \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \in [0,t]} {\left| \left| \left| D (\tau ) \right| \right| \right| } \int _0^t \mathrm{e}^{\gamma \tau } \, d \tau \end{aligned} \right\} \quad \forall \,t \in \mathbb R_+, \end{aligned}$$
(2.41)
where \(\kappa := \Vert v^T C(\gamma I +A)^{-1} \Vert >0\), it follows from (2.38)–(2.40) that
$$\begin{aligned} v^T C(-\gamma I - A)^{-1} \dot{z}(\tau ) \in E(\tau ) + H(\tau ) \quad \text {for almost all } \tau \in \mathbb R_+, \end{aligned}$$
so that
$$\begin{aligned} v^T C(-\gamma I - A)^{-1} z(t) - v^T C(-\gamma I - A)^{-1} x^0 \in \int _0^t \big ( E(\tau ) + H(\tau ) \big ) \, d \tau \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
(2.42)
By the definition of E and a, and monotonicity of the integral,
$$\begin{aligned} \int _0^t E(\tau ) \, d\tau \subseteq (-\infty , a(t)] \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
(2.43)
By the definition of H and \(\xi \), we have \(\int _0^t H(\tau ) \, d\tau \subseteq [-\xi (t), \xi (t)]\) for all \(t \in \mathbb R_+\), and so it follows from (2.42) that
$$\begin{aligned} v^T C(-\gamma I - A)^{-1} z(t) - v^T C(-\gamma I - A)^{-1} x^0 \le a(t) + \xi (t) \quad \forall \, t \in \mathbb R_+, \end{aligned}$$
by (2.43). Now, as \(0 \le v^T C(-\gamma I - A)^{-1} z(t)\), appealing to (2.41) yields that
$$\begin{aligned} \Vert Cz \Vert _{L^1(0,t)} \le K_1 \Vert x^0 \Vert + K_2 \mathrm{e}^{\gamma t} \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \in [0,t]} {\left| \left| \left| D(\tau ) \right| \right| \right| } \quad \forall \, t \in \mathbb R_+, \end{aligned}$$
(2.44)
for some positive constants \(K_1\) and \(K_2\), which are independent of t, D and \(x^0\).
An application of Lemma 2.1 to (2.37) implies that
$$\begin{aligned} z(t) - \mathrm {e}^{(A + \gamma I)t} x^0 \in \int _0^t \mathrm {e}^{(A + \gamma I)(t-\tau )}\left[ B \mathrm {e}^{\gamma \tau } F( \mathrm {e}^{-\gamma \tau } Cz(\tau )) + \mathrm {e}^{\gamma \tau } D(\tau ) \right] \, d\tau \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
Invoking (A3) and \(\alpha (A+\gamma I) <0\), we estimate \(\Vert z(t) \Vert \) using the above inclusion as follows
$$\begin{aligned} \Vert z(t) \Vert&\le K_3 \Vert x^0\Vert + K_4 \Vert Cz \Vert _{L^1(0,t)} + K_5 \mathrm {e}^{\gamma t} \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \in [0,t]} {\left| \left| \left| D(\tau ) \right| \right| \right| } \nonumber \\&\le (K_3 + K_4K_1) \Vert x^0\Vert + (K_4K_2 + K_5) \mathrm{e}^{\gamma t}\mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \in [0,t]} {\left| \left| \left| D(\tau ) \right| \right| \right| } \quad \forall \, t \in \mathbb R_+, \end{aligned}$$
by (2.44), for some \(K_3,K_4,K_5 >0\) which are independent of t, D and \(x^0\). Thus,
$$\begin{aligned} \Vert x(t) \Vert \le \varGamma (\mathrm{e}^{-\gamma t} \Vert x^0\Vert + \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \in [0,t]} {\left| \left| \left| D(\tau ) \right| \right| \right| } ) \quad \forall \, t \in \mathbb R_+, \end{aligned}$$
where \(\varGamma := \max \big \{ K_3 + K_4K_1,K_4K_2 + K_5\big \}\), completing the proof. \(\square \)
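To make the estimate (2.36) concrete, consider a scalar example of our own choosing (not from the text): \(A = -1\), \(B = C = 1\) and \(F(y) = \{\rho y\}\) with \(\rho = 0.5\), so that \(\mathbf{G}(0) = C(-A)^{-1}B = 1\) and (2.11) holds with \(v = 1\). The sketch below simulates (1.3) with a constant disturbance by the forward-Euler method and checks (2.36) against the candidate constants \(\varGamma = 2\), \(\gamma = 0.5\) (these particular values are ours; the theorem only asserts existence of some such constants):

```python
import numpy as np

# Scalar Lur'e system: A = -1, B = C = 1, F(y) = {rho*y}, so G(0) = 1
# and the small-gain condition (2.11) holds with v = 1 and rho = 0.5.
rho, x0, d_const = 0.5, 1.0, 0.2
Gamma, gamma = 2.0, 0.5          # candidate ISS constants for this example

dt, T = 1e-3, 10.0
ts = np.arange(0.0, T, dt)
x = np.empty_like(ts)
x[0] = x0
for k in range(len(ts) - 1):
    # forward-Euler step of  x' = A x + B*rho*(C x) + d  =  -0.5 x + d
    x[k + 1] = x[k] + dt * (-x[k] + rho * x[k] + d_const)

bound = Gamma * (np.exp(-gamma * ts) * x0 + d_const)  # right-hand side of (2.36)
print(bool(np.all(x <= bound)))  # the trajectory respects the ISS estimate
```

Here the exact solution \(x(t) = 0.6\,\mathrm{e}^{-0.5 t} + 0.4\) converges to the level set by the disturbance, while remaining below the exponentially decaying ISS envelope.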
The next corollary is a so-called ISS with bias result (see [36, 48]) which states that if the condition (2.11) fails on a bounded set, then solutions of (1.3) (which include those of (1.1)) still admit uniform estimates of the form (2.36), but with an additional positive constant term. We note that ISS with bias is closely related to the concept of input-to-state practical stability (ISpS), see [68, 69]. Although ISpS applies to more general nonlinear forced control systems, a difference is that it does not typically specify the form of the additional constant, denoted \(\beta \) in the bounds below.
Corollary 2.12
Given the forced differential Lur’e inclusion (1.3), assume that (A1)–(A3) hold. If there exist a strictly positive \(v \in \mathbb R^p_+\), \(\rho \in (0,1)\) and \(\varTheta \ge 0\) such that
$$\begin{aligned} \vert \mathbf{G}(0) w \vert _{v}\le \rho \vert y \vert _{v} \quad \forall \, w \in F(y) \; \; \forall \, y \in \mathbb R^p_+, \; \Vert y \Vert \ge \varTheta , \end{aligned}$$
(2.45)
then there exist \(\varGamma , \gamma >0\) such that, for all \(x^0 \in \mathbb R^n_+\) and all locally bounded \(D : \mathbb R_+ \rightarrow P_0(\mathbb R^n_+)\), every global solution x of (1.3) satisfies
$$\begin{aligned} \Vert x(t) \Vert \le \varGamma \Big ( \mathrm {e}^{-\gamma t} \Vert x^0\Vert + \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \in [0, t]}{\left| \left| \left| D(\tau ) \right| \right| \right| } + \beta \Big )\quad \forall \, t \in \mathbb R_+. \end{aligned}$$
(2.46)
Here
$$\begin{aligned} \beta :=\Vert B\Vert \sup _{\Vert y\Vert \le \varTheta } \left( \sup _{w\in F(y)}\Big (\mathrm{dist}{ }\big (w,S(y) \big ) \Big ) \right) , \end{aligned}$$
(2.47)
and
$$\begin{aligned} S(y) := \big \{w \in F(y) {:} \, |\mathbf{G}(0)w|_v\le \rho |y|_v \big \} \subseteq F(y). \end{aligned}$$
(2.48)
The number \(\beta \) in (2.47) seeks to capture the extent to which the inequality in (2.45) is violated on the set \(\{ y \in \mathbb R^p_+ {:} \, \Vert y \Vert < \varTheta \}\). Observe that if \(\varTheta = 0\), then \(\beta =0\) and the conclusions of Theorem 2.11 and Corollary 2.12 coincide.
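The constant \(\beta \) can be computed explicitly in simple cases. The following sketch uses illustrative data of our own choosing: a scalar Lur’e inequality with \(B = 1\), \(\mathbf{G}(0) = 1\), \(v = 1\), \(\rho = 0.5\), \(\varTheta = 1\) and \(F(y) = [0, g(y)]\), where g violates (2.45) only near zero:

```python
import numpy as np

# Scalar illustration (our own choice): B = 1, G(0) = 1, v = 1, rho = 0.5,
# Theta = 1, and set-valued nonlinearity F(y) = [0, g(y)] with
#   g(y) = 0.5*y + 0.2*max(0, 1 - y),
# so (2.45) holds for y >= Theta but fails on [0, Theta).
rho, Theta = 0.5, 1.0

def g(y):
    return rho * y + 0.2 * max(0.0, 1.0 - y)

def dist_to_S(y):
    # S(y) = [0, min(g(y), rho*y)], and the farthest point of F(y) = [0, g(y)]
    # from S(y) is w = g(y), at distance (g(y) - rho*y)^+
    return max(0.0, g(y) - rho * y)

ys = np.linspace(0.0, Theta, 1001)
beta = max(dist_to_S(y) for y in ys)   # grid approximation of (2.47), ||B|| = 1
print(beta)  # worst violation occurs at y = 0, giving beta = 0.2
```

The violation \(g(y) - \rho y = 0.2(1-y)^+\) is largest at \(y = 0\), so \(\beta = 0.2\) here, which is the constant bias entering the estimate (2.46).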
Proof of Corollary 2.12
Define \(H : \mathbb R^p_+ \rightarrow P_0(\mathbb R^m_+)\) by
$$\begin{aligned} H(y) = \left\{ \begin{aligned}&F(y)&\Vert y \Vert&\ge \varTheta , \\&S(y)&\Vert y \Vert&< \varTheta , \end{aligned} \right. \end{aligned}$$
(2.49)
where S(y) is given by (2.48), so that H satisfies (A3). For given \(x^0 \in \mathbb R^n_+\) and \(D : \mathbb R_+ \rightarrow P_0(\mathbb R^n_+)\), let x denote a global solution of (1.3). As \(D(t) \subseteq \mathbb R^n_+\) for almost all \(t \ge 0\), we have that \(x \ge 0\) and thus x is also a global nonnegative solution of
$$\begin{aligned} \dot{x} - Ax \in BH(Cx) + B\big [F(Cx) - H(Cx)\big ] + D = B H(Cx) + E, \quad x(0) = x^0, \end{aligned}$$
(2.50)
where \(E := B\big [F(Cx) - H(Cx)\big ] + D\). We seek to apply Theorem 2.11 to (2.50), with F and D in (1.3) replaced by H and E, respectively. Although it is possible that \(E(t) \not \subseteq \mathbb R^n_+\) for some \(t \ge 0\), since x is nonnegative it suffices to verify that E is locally bounded. We consider two exhaustive cases: if \(\Vert Cx(t) \Vert \ge \varTheta \), then
$$\begin{aligned} \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \in [0,t]}{\left| \left| \left| E(\tau ) \right| \right| \right| } = \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \in [0,t]} {\left| \left| \left| D(\tau ) \right| \right| \right| } < \infty , \end{aligned}$$
by construction. Alternatively, if \(\Vert Cx(t) \Vert < \varTheta \)
$$\begin{aligned} \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \in [0,t]}{\left| \left| \left| E(\tau ) \right| \right| \right| }&\le \sup _{\tau \in [0, t]} {\left| \left| \left| B\big [F(Cx(\tau )) - H(Cx(\tau ))\big ] \right| \right| \right| } + \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \in [0,t]} {\left| \left| \left| D(\tau ) \right| \right| \right| } \nonumber \\&\le \beta + \mathop {{{\mathrm{ess\,sup}}}}\limits _{\tau \in [0,t]} {\left| \left| \left| D(\tau ) \right| \right| \right| } < \infty . \end{aligned}$$
(2.51)
Therefore, the hypotheses of Theorem 2.11 hold which, when combined with (2.51), yield the existence of \(\varGamma , \gamma >0\) such that x satisfies the estimate (2.46). \(\square \)
Positive Lur’e differential equations and inequalities
As indicated in the Introduction, the stability (or otherwise) of equilibria of Lur’e differential equations, or simply Lur’e systems, has been the focus of much attention in the control theory literature. The forced positive Lur’e system
$$\begin{aligned} \dot{x}(t) = Ax(t) + B g(t,Cx(t)) +d(t), \quad x(0) = x^0, \quad t \in \mathbb R_+, \end{aligned}$$
(2.52)
where \(g : \mathbb R_+ \times \mathbb R^p_+ \rightarrow \mathbb R^m_+\) and \(d : \mathbb R_+ \rightarrow \mathbb R^n_+\), is a special case of the forced positive Lur’e inclusion (1.3) with
$$\begin{aligned} F(y) \; := \bigcup _{t \in \mathbb R_+} \{g(t,y)\} \quad \forall \, y \in \mathbb R^p_+ \quad \text {and} \quad D(t) := \{ d(t)\} \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
In certain applications, the so-called forced positive Lur’e inequality
$$\begin{aligned} 0 \le \dot{x}(t) - Ax(t) \le B g(t,Cx(t)) +d(t), \quad x(0) = x^0, \quad t \in \mathbb R_+, \end{aligned}$$
(2.53)
is also of interest. Note that an absolutely continuous function \(x: \mathbb R_+ \rightarrow \mathbb R^n\) is a solution of (2.53) if, and only if, x is a solution of (1.3) with
$$\begin{aligned} F(y) \; := \bigcup _{t \in \mathbb R_+} [0,g(t,y)] \quad \forall \, y \in \mathbb R^p_+ \quad \text {and} \quad D(t) := [0, d(t)] \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
In both of the special cases above, the set-valued map F satisfies assumption (A3) if, and only if, there exists \(R>0\) such that
$$\begin{aligned} \Vert g(t,y)\Vert \le R \Vert y \Vert \quad \forall \, t \in \mathbb R_+, \; \; \forall \, y \in \mathbb R^p_+. \end{aligned}$$
(2.54)
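The linear bound (2.54) is straightforward to test for concrete nonlinearities. A sketch with two time-invariant examples of our own choosing: a saturation, which satisfies (2.54) with \(R = 1\), and the square root, which fails it because \(g(y)/y\) is unbounded near \(y = 0\):

```python
import numpy as np

ys = np.linspace(1e-6, 10.0, 10_000)

# saturation nonlinearity g(y) = min(y, 1): here g(y) <= y, so (2.54) holds with R = 1
sat = np.minimum(ys, 1.0)
print(np.max(sat / ys))   # ratio g(y)/y is bounded by 1

# square root g(y) = sqrt(y): g(y)/y = 1/sqrt(y) blows up as y -> 0, so (2.54) fails
root = np.sqrt(ys)
print(np.max(root / ys))  # grows without bound as the grid refines near zero
```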
We note that (2.52) is a particular case of (2.53), and so focus attention on (2.53). The following corollary is an immediate consequence of Theorems 2.5, 2.11 and Corollary 2.12.
Corollary 2.13
Given the forced positive Lur’e inequality (2.53), assume that (A1) and (A2) hold and that g satisfies (2.54).
-
(i)
If \(d=0\) and there exists a strictly positive \(v \in \mathbb R^p_+\) such that
$$\begin{aligned} \vert \mathbf{G}(0) g(t,y) \vert _{v} \le \vert y \vert _{v} \quad \forall \, t \in \mathbb R_+, \; \; \forall \, y \in \mathbb R^p_+, \end{aligned}$$
(2.55)
then there exists \(\varGamma >0\) such that, for all \(x^0 \in \mathbb R^n_+\), every global solution x of (2.53) satisfies
$$\begin{aligned} \Vert x(t) \Vert \le \varGamma \Vert x^0\Vert \quad \forall \, t \in \mathbb R_+. \end{aligned}$$
-
(ii)
If \(d=0\) and there exist a strictly positive \(v \in \mathbb R^p_+\) and a lower semi-continuous function \(e : \mathbb R^p_+ \rightarrow \mathbb R_+\) such that
$$\begin{aligned} e(y) \; >0 \quad \text {and} \quad \vert \mathbf{G}(0) g(t,y) \vert _{v} + e(y) \; \le \vert y \vert _{v} \quad \forall \, t \in \mathbb R_+, \; \; \forall \, y \in \mathbb R^p_+\backslash \{0\}\, \end{aligned}$$
(2.56)
then, for all \(x^0 \in \mathbb R^n_+\), every global solution x of (2.53) satisfies \(x(t) \rightarrow 0\) as \(t \rightarrow \infty \).
-
(iii)
If there exist a strictly positive \(v \in \mathbb R^p_+\), \(\rho \in (0,1)\) and \(\varTheta \ge 0\) such that
$$\begin{aligned} \vert \mathbf{G}(0) g(t,y) \vert _{v} \le \rho \vert y \vert _{v} \quad \forall \, t \in \mathbb R_+, \; \; \forall \, y \in \mathbb R^p_+, \; \Vert y \Vert \ge \varTheta , \end{aligned}$$
(2.57)
then there exist \(\varGamma , \gamma >0\) such that, for all \(x^0 \in \mathbb R^n_+\) and all \(d \in L^\infty _\mathrm{loc}(\mathbb R_+ ; \mathbb R^n_+)\), every global solution x of (2.53) satisfies
$$\begin{aligned} \Vert x(t) \Vert \le \varGamma \Big ( \mathrm {e}^{-\gamma t} \Vert x^0\Vert + \Vert d(\tau )\Vert _{L^\infty (0,t)} + \beta \Big )\quad \forall \, t \in \mathbb R_+. \end{aligned}$$
Here
$$\begin{aligned} \beta = \beta (\varTheta ) = \Vert B \Vert \sup _{\Vert y \Vert \le \varTheta } \left( \sup _{t \ge 0} \Big (\mathrm{dist}{ }\big (g(t, y),T(t,y)\big )\Big ) \right) , \end{aligned}$$
and
$$\begin{aligned} T(t,y) := \big \{ w \in [0, g(t,y)] \subseteq \mathbb R^m_+ {:} \, |\mathbf{G}(0)w|_v\le \rho |y|_v \big \}. \end{aligned}$$
The following remark provides some commentary on the above corollary.
Remark 2.14
As the conditions (2.55)–(2.57) are assumed to hold uniformly in the first variable, for ease of presentation the following comments suppress this variable.
-
(a)
In the situation where \(m=p=1\) and g is continuous, the conditions (2.55), (2.56) and (2.57) (the latter with \(\varTheta =0\)) simplify to:
$$\begin{aligned} \mathbf{G}(0) g(y) \le y, \quad \mathbf{G}(0) g(y) < y \; (\text {for }y >0) \quad \text {and} \quad \mathbf{G}(0) g(y) \le \rho y \quad \forall \, y \in \mathbb R_+, \end{aligned}$$
respectively.
-
(b)
When \(m=p\), sufficient conditions for (2.55) or (2.57) are the inequalities
$$\begin{aligned} \Vert \mathbf{G}(0) \Vert _v \Vert g\Vert _v \le 1 \quad \text {or} \quad \Vert \mathbf{G}(0) \Vert _v \Vert g\Vert _v < 1, \end{aligned}$$
(2.58)
respectively, where
$$\begin{aligned} \Vert \mathbf{G}(0) \Vert _v := \sup _{\begin{array}{c} \xi \in \mathbb R^m_+ \\ \xi \ne 0 \end{array}} \frac{\vert \mathbf{G}(0)\xi \vert _{v}}{\vert \xi \vert _{v}} \quad \text {and} \quad \Vert g \Vert _v := \sup _{\begin{array}{c} \xi \in \mathbb R^m_+ \\ \xi \ne 0 \end{array}} \frac{\vert g(\xi ) \vert _{v}}{\vert \xi \vert _{v}}, \end{aligned}$$
are the induced v-norms. The inequalities in (2.58) are reminiscent of classical small-gain conditions, only here formulated in the induced v-norm. We note that, in general, \(\Vert \mathbf{G}(0) \Vert _v \ne \Vert \mathbf{G}(0) \Vert _2 = \Vert \mathbf{G}\Vert _{H^\infty }\), where the final equality is a property enjoyed by linear positive systems; see, for example, [24, Theorem 5].
-
(c)
The conclusions of statements (i) and (ii) of Corollary 2.13 are similar to those in [58, Theorem 7.2] or [1, Theorem 5.6, p. 156], where a linear dissipativity theory approach to the absolute stability of positive Lur’e systems is taken. In [58, Theorem 7.2] the authors assume that the pair (C, A) is observable, and that the nonlinearity g satisfies \(0 \le g(y) \; \le My\) for all \(y \in \mathbb R^p_+\), for some nonnegative matrix \(M \in \mathbb R^{m \times p}_+\). Further, it is assumed that the triple (A, B, C) is exponentially linearly dissipative with respect to the supply rate \(s(u,y) = \mathbb {1}^T u - \mathbb {1}^T My\), which is equivalent to the inequality
$$\begin{aligned} \mathbb {1}^T M\mathbf{G}(0) \ll \mathbb {1}^T. \end{aligned}$$
(2.59)
The inequality (2.59) is itself equivalent to \(\vert M \mathbf{G}(0) \vert _{\mathbb {1}} = \Vert M \mathbf{G}(0) \Vert _1 < 1\) and may be interpreted as a small-gain condition in the induced one-norm. Although not directly comparable, our norm conditions in (i)–(iii) are more general as they allow a small-gain condition in a weighted one-norm induced by any strictly positive vector, not just the usual one-norm, see Example 4.3. Finally, we remark that [58] focusses on unforced systems only. \(\square \)
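The induced v-norms appearing in Remark 2.14(b) admit a closed form for nonnegative matrices: since \(\vert x \vert _v = v^T x\) on the nonnegative orthant, the ratio \((v^T M \xi )/(v^T \xi )\) is a weighted average of the column ratios \((v^T M)_j / v_j\), so the supremum is attained at a coordinate vector. A sketch, assuming \(m = p\), with an illustrative nonnegative matrix of our own choosing in place of \(\mathbf{G}(0)\):

```python
import numpy as np

def induced_v_norm(M, v):
    # For nonnegative M and strictly positive v, with |x|_v := v^T x on the
    # nonnegative orthant, the supremum over xi >= 0, xi != 0 is attained at a
    # coordinate vector, giving the closed form max_j (v^T M)_j / v_j.
    return np.max((v @ M) / v)

M = np.array([[0.3, 0.2],
              [0.1, 0.4]])      # illustrative nonnegative stand-in for G(0)
v = np.array([1.0, 2.0])        # strictly positive weight vector

print(induced_v_norm(M, v))               # weighted small-gain quantity: 0.5
print(induced_v_norm(M, np.ones(2)))      # v = 1 recovers the usual 1-norm: 0.6
print(np.linalg.norm(M, 1))               # max column sum agrees with v = 1: 0.6
```

Note that the choice of weight matters: this M satisfies the strict small-gain condition with margin 0.5 in the v-weighted norm, versus 0.6 in the unweighted one-norm, which is precisely the extra freedom discussed in (i)–(iii) above.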