1 Introduction

One of the main properties of chaotic dynamical systems is that different orbits starting close together move apart as time evolves, and this divergence can be very fast. In particular, when performing numerical experiments, small numerical errors, which are inherent to the process due to computer round-off, tend to grow. Thus, a central question when performing such numerical experiments is whether the behaviour produced by the computer reflects the dynamics of the actual system. It turns out that for some classes of chaotic systems, such as the hyperbolic ones, even though a numerical orbit containing round-off errors will diverge rapidly from the true orbit with the same initial condition, there exists a different true orbit which stays near the noisy orbit. Systems exhibiting this property are said to have the shadowing property. In other words, a dynamical system has the shadowing property if close to each of its approximate orbits we can find an exact one. The main objective of the present paper is to present sufficient conditions under which a general class of nonautonomous nonlinear ordinary differential equations exhibits a new variant of the shadowing property, the so-called conditional Lipschitz shadowing property, defined below (see Definition 2).

Let \({\mathbb {R}}^n\) and \({\mathbb {R}}^{n\times n}\) denote the n-dimensional space of real column vectors and the space of \(n\times n\) matrices with real entries, respectively. The symbol \(|\cdot |\) denotes any convenient norm on \({\mathbb {R}}^n\) and the associated induced norm on \({\mathbb {R}}^{n\times n}\). Consider the ordinary differential equation

$$\begin{aligned} x'=g(t,x), \end{aligned}$$
(1.1)

where \(g:[0, \infty ) \times {\mathbb {R}}^n \rightarrow {\mathbb {R}}^n\) is continuous. We are interested in the noncontinuable solutions of (1.1) starting at \(t=0\). It is known that these solutions are defined on intervals of type \([0,\tau )\), where \(\tau \in (0,\infty ]\) may depend on the solution x. For this reason, we will consider pseudosolutions (approximate solutions) of (1.1) on intervals of the same type in the following sense. Given \(\tau \in (0,\infty ]\), by a pseudosolution of Eq. (1.1) on \([0,\tau )\), we mean any continuously differentiable function \(y:[0,\tau )\rightarrow {\mathbb {R}}^n\) such that

$$\begin{aligned} \sigma _y:=\sup _{0\le t<\tau }|y'(t)-g(t,y(t))|<\infty . \end{aligned}$$
(1.2)

The function \(e_y:[0,\tau )\rightarrow [0,\infty )\) defined by

$$\begin{aligned} e_y(t):=|y'(t)-g(t, y(t))|\qquad \text {for } t\in [0,\tau ), \end{aligned}$$
(1.3)

will be called the error function and the quantity \(\sigma _y\) is the maximum error corresponding to y. Let us recall the definition of the standard Lipschitz shadowing property.

Definition 1

We say that Eq. (1.1) has the Lipschitz shadowing property if there exist \(\varepsilon _0>0\) and \(\kappa >0\) with the following property: if \(0<\varepsilon \le \varepsilon _0\) and y is a pseudosolution of (1.1) on \([0,\tau )\) for some \(\tau \in (0,\infty ]\) such that \(\sigma _y\le \varepsilon \), then Eq. (1.1) has a solution x on \([0,\tau )\) satisfying

$$\begin{aligned} \sup _{0\le t<\tau }|x(t)-y(t)|\le \kappa \varepsilon . \end{aligned}$$
(1.4)

The concept of Lipschitz shadowing is closely related to the stronger notion of Hyers–Ulam stability (Ulam stability), which requires that the condition in the above definition is satisfied for every \(\varepsilon >0\). For a related concept in the theory of smooth dynamical systems, see [15, Definition 1.5].
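Before turning to semilinear equations, the notions in (1.2)–(1.3) can be made concrete numerically. The following hedged sketch uses a toy scalar equation and a candidate pseudosolution of our own choosing (they play no role in the results of this paper) to evaluate the error function and the maximum error on a grid.

```python
import numpy as np

# Hedged numerical sketch (our own toy data, not from the paper): for the
# scalar equation x' = g(t, x) = -x, take y(t) = e^{-t} + 0.01*sin(t) and
# evaluate the error function (1.3) and the maximum error (1.2) on a grid.
t = np.linspace(0.0, 20.0, 4001)
g = lambda t, x: -x
y = np.exp(-t) + 0.01 * np.sin(t)
y_prime = -np.exp(-t) + 0.01 * np.cos(t)      # exact derivative of y

e_y = np.abs(y_prime - g(t, y))               # e_y(t) = |y'(t) - g(t, y(t))|
sigma_y = e_y.max()                           # grid approximation of sup e_y
# analytically e_y(t) = 0.01*|sin t + cos t|, so sigma_y should be close to
# 0.01*sqrt(2): y is a pseudosolution with finite maximum error
```

Here the supremum in (1.2) is only approximated over the grid, but the analytic bound \(e_y(t)=0.01|\sin t+\cos t|\le 0.01\sqrt{2}\) confirms the computation.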

Consider the semilinear differential equation

$$\begin{aligned} x'=A(t)x+f(t,x) \end{aligned}$$
(1.5)

as a perturbation of the linear equation

$$\begin{aligned} x'=A(t)x, \end{aligned}$$
(1.6)

where \(A:[0,\infty )\rightarrow {\mathbb {R}}^{n\times n}\) and \(f:[0,\infty )\times {\mathbb {R}}^{n}\rightarrow {\mathbb {R}}^n\) are continuous. The following known result provides a sufficient condition for the Lipschitz shadowing property of (1.5) (see [2, Theorem 6]).

Theorem 1

Suppose that the linear Eq. (1.6) has an exponential dichotomy on \([0,\infty )\) (see Definition 3) and there exists \(L\ge 0\) such that

$$\begin{aligned} |f(t,x_1)-f(t,x_2)|\le L|x_1-x_2|\qquad \text {for all } t\ge 0 \text { and }x_1, x_2\in {\mathbb {R}}^n. \end{aligned}$$
(1.7)

If L is sufficiently small, then Eq. (1.5) has the Lipschitz shadowing property.

Note that Theorem 1 can be extended to the more general class of delay differential equations (see [4, Theorem 2.3]). For further related results about shadowing and Hyers–Ulam stability of ordinary differential equations and their discrete counterparts, see [1,2,3,4,5,6,7,8] and references therein.

Some recent studies (see [12, 19]) have been concerned with the shadowing (Hyers–Ulam stability) of the scalar logistic equation

$$\begin{aligned} x'=x(ax+b),\qquad a,b\in {\mathbb {R}}\setminus \{0\}, \end{aligned}$$
(1.8)

which is a particular case of Eq. (1.5) when \(n=1\), \(A(t)=b\) and \(f(t,x)=ax^2\). Note that Theorem 1 cannot be applied to Eq. (1.8) because f does not satisfy the global Lipschitz condition (1.7). In [19] it has been shown that Eq. (1.8) with \(a=-1\) and \(b=1\) is not Hyers–Ulam stable, but certain approximate solutions still can be shadowed by true solutions. Motivated by this observation, we introduce the notion of conditional Lipschitz shadowing, which does not require the validity of the Lipschitz shadowing property for all pseudosolutions, but only for those which belong to a given set \(H\subset {\mathbb {R}}^n\).

Definition 2

Let H be a nonempty subset of \({\mathbb {R}}^n\). We say that Eq. (1.1) has the conditional Lipschitz shadowing property in H if there exist \(\varepsilon _0>0\) and \(\kappa >0\) with the following property: if \(0<\varepsilon \le \varepsilon _0\) and y is a pseudosolution of (1.1) on \([0,\tau )\) for some \(\tau \in (0,\infty ]\) such that \(\sigma _y\le \varepsilon \) and \(y(t)\in H\) for all \(t\in [0,\tau )\), then Eq. (1.1) has a solution x on \([0,\tau )\) satisfying (1.4).

Evidently, the standard Lipschitz shadowing property is a special case of the conditional Lipschitz shadowing property with \(H={\mathbb {R}}^n\). A different concept of conditional shadowing for discrete nonautonomous systems in a Banach space has recently been introduced by Pilyugin [16].

In this paper, we establish two types of sufficient conditions under which certain classes of ordinary differential equations have the conditional Lipschitz shadowing property in a given set \(H\subset {\mathbb {R}}^n\).

In Sec. 2, we consider the semilinear differential equation (1.5). The main result of this part is formulated in Theorem 2, which is a generalization of Theorem 1 to the case of conditional Lipschitz shadowing. It says that Eq. (1.5) has the conditional Lipschitz shadowing property in a prescribed set H whenever its linear part has an exponential dichotomy and the nonlinearity \(f(t,x)\) is Lipschitz in x in a neighborhood of H (uniformly in t) with a sufficiently small Lipschitz constant. The smallness condition on the Lipschitz constant can be expressed in terms of the dichotomy constants of the linear part. The importance of the obtained sufficient condition will be shown by an application to a scalar logistic equation. In Example 1, we show that our choice of pseudosolutions is optimal.

In Sec. 3, we present sufficient conditions for the conditional Lipschitz shadowing of Eq. (1.1) in terms of the logarithmic norm of \(g_x(t,x)\), the partial derivative of g with respect to x. The logarithmic norm (Lozinskiĭ measure) of a square matrix \(A\in {\mathbb {R}}^{n\times n}\) is defined by

$$\begin{aligned} \mu (A):=\lim _{h\rightarrow 0+}\frac{|I+hA|-1}{h}\qquad \hbox { for}\ A\in {\mathbb {R}}^{n\times n}, \end{aligned}$$
(1.9)

where I is the identity matrix in \({\mathbb {R}}^{n\times n}\). It should be noted that \(\mu \) is not a norm, since it can take negative values. In the scalar case (\(n=1\)), we have that \(\mu (A)=A\). The values of \(\mu (A)\) for the standard norms in \({\mathbb {R}}^n\) can be given explicitly (see Sec. 3). The main result of this section, Theorem 5, says that if \(\mu (g_x(t,x))\) is uniformly negative (bounded away from zero) for all \(t\ge 0\) and x in a neighborhood of the given set \(H\subset {\mathbb {R}}^n\), then Eq. (1.1) has the conditional Lipschitz shadowing property in H. To the best of our knowledge, this criterion has no previous analogue. It gives a new result even in the case of the standard Lipschitz shadowing. The importance and the sharpness of the assumptions will be shown in a special case of the Kermack–McKendrick equation from epidemiology.

2 Conditional Lipschitz Shadowing via Exponential Dichotomy

In this section, we give sufficient conditions for the conditional Lipschitz shadowing of the semilinear Eq. (1.5).

Let \(\varPhi \) be a fundamental matrix solution of the linear Eq. (1.6) so that its transition matrix \(T(t,s)\) is given by

$$\begin{aligned} T(t,s):=\varPhi (t)\varPhi ^{-1}(s)\qquad \hbox { for}\ t,s\in [0,\infty ). \end{aligned}$$

Definition 3

We say that Eq. (1.6) has an exponential dichotomy on \([0,\infty )\) if there exist a family of projections \(\left( P(t)\right) _{t\ge 0}\) in \({\mathbb {R}}^{n\times n}\) and constants \(N, \lambda >0\) such that

$$\begin{aligned} P(t)T(t,s)=T(t,s)P(s)\qquad \hbox { for all}\ t,s\in [0,\infty ),\end{aligned}$$
(2.1)
$$\begin{aligned} |T(t,s)P(s)|\le Ne^{-\lambda (t-s)}\qquad \hbox { whenever}\ t\ge s\ge 0 \end{aligned}$$
(2.2)

and

$$\begin{aligned} |T(t,s)(I-P(s))|\le Ne^{-\lambda (s-t)}\qquad \hbox { whenever}\ 0\le t\le s. \end{aligned}$$
(2.3)

If, in addition, \(P(t)=I\) (\(P(t)=0\)) identically for \(t\ge 0\), we say that Eq. (1.6) admits an exponential contraction (exponential expansion) on \([0,\infty )\).
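As a hedged sanity check of Definition 3 (a minimal constant-coefficient example of our own choosing, not used elsewhere in the paper), the system \(x'=\mathrm{diag}(-1,1)x\) has transition matrix \(T(t,s)=\mathrm{diag}(e^{-(t-s)},e^{t-s})\) and admits an exponential dichotomy with \(P(t)\equiv \mathrm{diag}(1,0)\), \(N=1\) and \(\lambda =1\) in the Euclidean norm:

```python
import numpy as np

# Hedged sanity check of Definition 3 on a minimal constant-coefficient
# example of our own choosing: x' = diag(-1, 1) x has transition matrix
# T(t,s) = diag(e^{-(t-s)}, e^{t-s}) and an exponential dichotomy with
# P(t) = diag(1, 0), N = 1 and lambda = 1 in the Euclidean norm.
P = np.diag([1.0, 0.0])
I = np.eye(2)
T = lambda t, s: np.diag([np.exp(-(t - s)), np.exp(t - s)])

for t, s in [(3.0, 1.0), (0.5, 2.0), (4.0, 4.0)]:
    assert np.allclose(P @ T(t, s), T(t, s) @ P)   # invariance condition (2.1)
    if t >= s:
        # stable estimate (2.2): |T(t,s)P(s)| <= N e^{-lambda(t-s)}
        assert np.linalg.norm(T(t, s) @ P, 2) <= np.exp(-(t - s)) + 1e-12
    else:
        # unstable estimate (2.3): |T(t,s)(I-P(s))| <= N e^{-lambda(s-t)}
        assert np.linalg.norm(T(t, s) @ (I - P), 2) <= np.exp(-(s - t)) + 1e-12
```

Since the projections here are constant and the matrices diagonal, all three conditions hold with equality up to rounding.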

Throughout the paper, for \(x\in {\mathbb {R}}^n\) and \(\delta >0\), \(B_\delta (x)\) will denote the closed \(\delta \)-neighborhood of x in \({\mathbb {R}}^n\) given by \(B_\delta (x):=\{\,y\in {\mathbb {R}}^n:|y-x|\le \delta \,\}\). For \(\emptyset \ne H\subset {\mathbb {R}}^n\), the \(\delta \)-neighborhood of H is defined by

$$\begin{aligned} {\mathcal {N}}_\delta (H):=\bigcup _{x\in H} B_\delta (x). \end{aligned}$$

The main result of this section is the following generalization of Theorem 1 to the case of conditional Lipschitz shadowing.

Theorem 2

Let \(\emptyset \ne H\subset {\mathbb {R}}^n\). Suppose that Eq. (1.6) has an exponential dichotomy on \([0,\infty )\) and that there exist \(\delta , L>0\) such that

$$\begin{aligned} |f(t,x_1)-f(t,x_2)|\le L|x_1-x_2|,\qquad \text {for all } t\ge 0 \text { and }x_1,x_2\in {{\mathcal {N}}}_{\delta }(H). \end{aligned}$$
(2.4)

If

$$\begin{aligned} L<\frac{\lambda }{2N}, \end{aligned}$$
(2.5)

where \(N, \lambda >0\) are as in Definition 3, then Eq. (1.5) has the conditional Lipschitz shadowing property in H.

Proof

From (2.5), it follows that the equation

$$\begin{aligned} \frac{2N}{\lambda }(L\kappa +1)=\kappa \end{aligned}$$
(2.6)

has a unique solution \(\kappa \) given by

$$\begin{aligned} \kappa :=\biggl (\frac{\lambda }{2N}-L\biggr )^{-1}=\frac{2N}{\lambda -2NL}>0. \end{aligned}$$

Set \(\varepsilon _0:=\kappa ^{-1}\delta >0\). Suppose that \(0<\varepsilon \le \varepsilon _0\) and y is a pseudosolution of (1.5) on \([0,\tau )\) for some \(\tau \in (0,\infty ]\) such that \(\sigma _y\le \varepsilon \) and \(y(t)\in H\) for all \(t\in [0,\tau )\). Observe that the transformation \(z=x-y\) reduces Eq. (1.5) to the equation

$$\begin{aligned} z'=A(t)z+f(t,y(t)+z)-f(t,y(t))+h_y(t), \end{aligned}$$
(2.7)

where

$$\begin{aligned} h_y(t):=A(t)y(t)+f (t,y(t))-y'(t)\qquad \hbox { for}\ t\in [0,\tau ). \end{aligned}$$
(2.8)

Clearly, \(|h_y(t)|=e_y(t)\) for \(t\in [0,\tau )\), where \(e_y\) is the error function corresponding to the pseudosolution y of Eq. (1.5). Hence

$$\begin{aligned} \sup _{0\le t<\tau }|h_y(t)|=\sigma _y\le \varepsilon . \end{aligned}$$
(2.9)

Let \({\mathcal {C}}_b:=C_b([0,\tau ),{\mathbb {R}}^n)\) denote the Banach space of bounded and continuous functions \(z:[0, \tau ) \rightarrow \mathbb R^n\) equipped with the supremum norm,

$$\begin{aligned} \Vert z\Vert =\sup _{t\in [0,\tau )}|z(t)|,\qquad z\in {\mathcal {C}}_b. \end{aligned}$$

Set

$$\begin{aligned} S:=\{\,z\in {\mathcal {C}}_b:\Vert z\Vert \le \kappa \varepsilon \,\}. \end{aligned}$$

Clearly, S is a nonempty and closed subset of \({\mathcal {C}}_b\). For \(z\in S\) and \(t\in [0,\tau )\), define

$$\begin{aligned} ({\mathcal {F}} z)(t)&:=\int _0^t T(t,s)P(s)\left[ \,f(s,y(s)+z(s))-f(s,y(s))+h_y(s)\,\right] \,ds\\&\qquad -\int _t^\tau T(t,s)(I-P(s))\left[ \,f(s,y(s)+z(s))-f(s,y(s))+h_y(s)\,\right] \,ds. \end{aligned}$$

Take an arbitrary \(z\in S\) and \(s\in [0,\tau )\). Then,

$$\begin{aligned} |(y(s)+z(s))-y(s)|=|z(s)|\le \Vert z\Vert \le \kappa \varepsilon \le \kappa \varepsilon _0=\delta . \end{aligned}$$

Therefore,

$$\begin{aligned} y(s)\in H\subset {\mathcal {N}}_\delta (H)\qquad \text {and}\qquad y(s)+z(s)\in B_\delta (y(s))\subset {\mathcal {N}}_\delta (H), \end{aligned}$$
(2.10)

which, together with (2.4), implies that

$$\begin{aligned} |f(s,y(s)+z(s))-f(s,y(s))|\le L|z(s)|\le L\Vert z\Vert \le L\kappa \varepsilon . \end{aligned}$$

This, combined with (2.2), (2.3) and (2.9) (see also (2.6)) yields that

$$\begin{aligned} \begin{aligned}&|({\mathcal {F}} z)(t)|\le \int _0^t |T(t,s)P(s)| \left( \,|f(s,y(s)+z(s))-f(s,y(s))|+|h_y(s)|\,\right) \,ds\\&\qquad +\int _t^\tau |T(t,s)(I-P(s))|\left( \,|f(s,y(s)+z(s))-f(s,y(s))|+|h_y(s)|\,\right) \,ds\\&\le \left( L\kappa \varepsilon +\varepsilon \right) \biggl (\int _0^t Ne^{-\lambda (t-s)}\,ds +\int _t^\infty Ne^{-\lambda (s-t)}\,ds\biggr )\\&\le \frac{2N}{\lambda }\left( L\kappa +1\right) \varepsilon =\kappa \varepsilon , \end{aligned} \end{aligned}$$
(2.11)

for \(z\in S\) and \(t\in [0,\tau )\). We conclude that \({\mathcal {F}} z\) is well-defined and \({\mathcal {F}}(S)\subset S\).

Let \(z_1, z_2\in S\). In view of (2.10), we have that \(y(s)+z_j(s)\in {\mathcal {N}}_\delta (H)\) for \(s\in [0,\tau )\) and \(j=1,2\). Hence,

$$\begin{aligned} |f(s,y(s)+z_1(s))-f(s,y(s)+z_2(s))|\le L|z_1(s)-z_2(s)|\le L\Vert z_1-z_2\Vert , \end{aligned}$$

for \(s\in [0, \tau )\). Consequently,

$$\begin{aligned} \begin{aligned}&|({\mathcal {F}} z_1)(t)-({\mathcal {F}} z_2)(t)| \\&\le \int _0^t |T(t,s)P(s)| \left( \,|f(s,y(s)+z_1(s))-f(s,y(s)+z_2(s))|\,\right) \,ds\\&\qquad +\int _t^\tau |T(t,s)(I-P(s))|\left( \,|f(s,y(s)+z_1(s))-f(s,y(s)+z_2(s))|\,\right) \,ds\\&\le L\Vert z_1-z_2\Vert \biggl (\int _0^t Ne^{-\lambda (t-s)}\,ds +\int _t^\infty Ne^{-\lambda (s-t)}\,ds\biggr )\\&\le \frac{2N}{\lambda }L\Vert z_1-z_2\Vert , \end{aligned} \end{aligned}$$
(2.12)

for \(t\in [0, \tau )\). Therefore, for all \(z_1,z_2\in S\),

$$\begin{aligned} \Vert {\mathcal {F}} z_1-{\mathcal {F}} z_2\Vert \le q\Vert z_1-z_2\Vert \qquad \text {with}~q:=\frac{2N}{\lambda }L<1. \end{aligned}$$

Thus, \({\mathcal {F}}:S\rightarrow S\) is a contraction and it has a unique fixed point z in S. It follows by differentiation that z is a solution of Eq. (2.7) on \([0,\tau )\). Moreover, \(z\in S\) implies that

$$\begin{aligned} \sup _{t\in [0,\tau )}|z(t)|=\Vert z\Vert \le \kappa \varepsilon . \end{aligned}$$

Therefore, \(x=z+y\) is a solution of Eq. (1.5) on \([0,\tau )\) with the desired property (1.4). The proof of the theorem is complete. \(\square \)

Remark 1

Theorem 1 is a corollary of Theorem 2 with \(H={\mathbb {R}}^n\).

Remark 2

If D is a nonempty set in \({\mathbb {R}}^n\), then its convex hull, denoted by \({\text {conv}}(D)\), is the smallest convex set in \({\mathbb {R}}^n\) which contains D. A sufficient condition for the Lipschitz condition (2.4) to hold is that f is continuously differentiable and

$$\begin{aligned} |f_x(t,x)|\le L\qquad \text {for all } t\ge 0 \text { and } x\in {\text {conv}}\left( \mathcal N_\delta (H)\right) . \end{aligned}$$
(2.13)

The following result is an improvement of Theorem 2 in the particular case when Eq. (1.6) admits an exponential contraction or exponential expansion. It shows that in these settings the smallness condition (2.5) for the Lipschitz constant L can be weakened.

Theorem 3

Let \(\emptyset \ne H\subset {\mathbb {R}}^n\). Suppose that Eq. (1.6) has an exponential contraction or exponential expansion on \([0,\infty )\) and there exist \(\delta , L>0\) such that (2.4) holds. If

$$\begin{aligned} L<\frac{\lambda }{N}, \end{aligned}$$
(2.14)

then Eq. (1.5) has the conditional Lipschitz shadowing property in H.

Proof

The proof proceeds in a similar manner as the proof of Theorem 2. Take \(\kappa >0\) such that

$$\begin{aligned} \frac{N}{\lambda }(L\kappa +1)=\kappa . \end{aligned}$$

Let \(\varepsilon _0>0\), y, S and \({\mathcal {F}}\) be as in the proof of Theorem 2. By arguing as in (2.11) (recall that either \(P(t)\equiv I\) or \(P(t)\equiv 0\)), we have that

$$\begin{aligned} | ({\mathcal {F}} z)(t)| \le \frac{N}{\lambda }(L\kappa +1)\varepsilon =\kappa \varepsilon \end{aligned}$$

for \(t\in [0, \tau )\) and \(z\in S\). Moreover, by similar estimates as in (2.12), we conclude that

$$\begin{aligned} |({\mathcal {F}} z_1)(t)-({\mathcal {F}} z_2)(t)|\le \frac{N}{\lambda }L\Vert z_1-z_2\Vert , \end{aligned}$$

for \(t\in [0, \tau )\) and \(z_1, z_2 \in S\). Now one can complete the proof by the same arguments as in the proof of Theorem 2. \(\square \)

The following consequence of Theorems 2 and 3 gives sufficient conditions under which Eq. (1.5) has the conditional Lipschitz shadowing property in a given neighborhood of the origin.

Corollary 1

Let \(\rho >0\). Suppose that Eq. (1.6) has an exponential dichotomy on \([0,\infty )\) and there exist \(\delta , L>0\) such that

$$\begin{aligned} |f(t,x_1)-f(t,x_2)|\le L|x_1-x_2|\qquad \hbox { for all } t\ge 0 \hbox { and } x_1, x_2\in B_{\rho +\delta }(0). \end{aligned}$$
(2.15)

Then, (2.5) implies that Eq. (1.5) has the conditional Lipschitz shadowing property in \(B_\rho (0)\). Moreover, if instead of the existence of an exponential dichotomy, we assume that Eq. (1.6) has an exponential contraction or exponential expansion on \([0,\infty )\), then the conditional Lipschitz shadowing property of Eq. (1.5) in \(B_\rho (0)\) holds under the weaker condition (2.14).

Proof

Let \(H=B_{\rho }(0)\). Then \({\mathcal {N}}_\delta (H)=B_{\rho +\delta }(0)\) and the conclusion follows from Theorems 2 and 3. \(\square \)

The following theorem gives another reason for the interest in those pseudosolutions of Eq. (1.5) which lie in a given ball around the origin.

Theorem 4

Let \(\rho >0\). Suppose that Eq. (1.6) has an exponential dichotomy on \([0,\infty )\) and there exists \(L>0\) such that

$$\begin{aligned} |f(t,x)|\le L|x|\qquad \hbox { for all } t\ge 0 \hbox { and } x\in B_\rho (0). \end{aligned}$$
(2.16)

Then, (2.5) implies that there exists \(\varepsilon >0\) such that if y is a pseudosolution of Eq. (1.5) on \([0,\tau )\) for some \(\tau \in (0,\infty ]\) with \(\sigma _y\le \varepsilon \), then Eq. (1.5) has a pseudosolution z on \([0,\tau )\) which lies in \(B_\rho (0)\) and has the same error function as y, i.e.

$$\begin{aligned} e_z(t)=e_y(t)\qquad \hbox { for all}\ t\in [0,\tau ). \end{aligned}$$
(2.17)

Proof

In view of (2.5), the equation

$$\begin{aligned} \frac{2N}{\lambda }(L\rho +\varepsilon )=\rho \end{aligned}$$
(2.18)

has a unique solution \(\varepsilon \) given by

$$\begin{aligned} \varepsilon :=\biggl (\frac{\lambda }{2N}-L\biggr )\rho >0. \end{aligned}$$

Suppose that y is a pseudosolution of (1.5) on \([0,\tau )\) for some \(\tau \in (0,\infty ]\) with \(\sigma _y\le \varepsilon \). Let \({\mathcal {C}}:=C([0,\tau ),{\mathbb {R}}^n)\) denote the topological vector space of all continuous functions \(z:[0,\tau ) \rightarrow \mathbb R^n\) equipped with the topology of uniform convergence on compact subsets of \([0,\tau )\). Let

$$\begin{aligned} S:=\biggl \{\,z\in {\mathcal {C}}:\ \sup _{t\in [0,\tau )} |z(t)|\le \rho \biggr \}. \end{aligned}$$
(2.19)

Clearly, S is a nonempty, closed and convex subset of \(\mathcal C\). For \(z\in S\) and \(t\in [0,\tau )\), set

$$\begin{aligned} ({\mathcal {F}} z)(t)&:=\int _0^t T(t,s)P(s)\left[ \,f(s,z(s))-h_y(s)\,\right] \,ds\\&\qquad -\int _t^\tau T(t,s)(I-P(s))\left[ \,f(s,z(s))-h_y(s)\,\right] \,ds, \end{aligned}$$

with \(h_y\) as in (2.8). In view of (2.2), (2.3), (2.8), (2.9) and (2.16), we have for \(z\in S\) and \(t\in [0,\tau )\),

$$\begin{aligned} |({\mathcal {F}} z)(t)|&\le \int _0^t |T(t,s)P(s)|(\,L|z(s)|+|h_y(s)|\,)\,ds\\&\qquad +\int _t^\tau |T(t,s)(I-P(s))|(\,L|z(s)|+|h_y(s)|\,)\,ds\\&\le \int _0^t Ne^{-\lambda (t-s)}(L\rho +\sigma _y)\,ds +\int _t^\infty Ne^{-\lambda (s-t)}(L\rho +\sigma _y)\,ds\\&\le \frac{2N}{\lambda }(L\rho +\varepsilon )=\rho , \end{aligned}$$

where the last equality follows from (2.18). Thus, \(\mathcal F z\) is well-defined and \({\mathcal {F}}(S) \subset S\). It follows in a standard manner that \({\mathcal {F}}:S\rightarrow S\) is continuous. In view of (2.19), the functions from the image set \({\mathcal {F}}(S)\subset S\) are uniformly bounded on \([0,\tau )\). Take now an arbitrary compact subinterval \(I\subset [0, \tau )\) and \(z\in S\). It follows by differentiation that

$$\begin{aligned} ({\mathcal {F}} z)'(t)=A(t)({\mathcal {F}} z)(t)+f(t,z(t))-h_y(t), \qquad t\in [0,\tau ). \end{aligned}$$
(2.20)

Hence,

$$\begin{aligned} \sup _{t\in I} |({\mathcal {F}} z)'(t)| \le \rho \max _{t\in I} |A(t)|+L\rho +\varepsilon . \end{aligned}$$

We conclude that the derivatives of the functions in \({\mathcal {F}}(S)\) are uniformly bounded on each compact subinterval of \([0, \tau )\), which implies that the functions in \({\mathcal {F}}(S)\) are equicontinuous on every compact subinterval of \([0,\tau )\). Therefore, by the Arzelà–Ascoli theorem, the closure of \({\mathcal {F}}(S)\) in \({\mathcal {C}}\) is compact. By the application of the Schauder–Tychonoff fixed point theorem (see, e.g., [9, Chap. I, p. 9]), we conclude that there exists \(z\in S\) such that \(z={\mathcal {F}} z\). From (2.20), it follows that

$$\begin{aligned} z'(t)=A(t)z(t)+f(t,z(t))-h_y(t), \qquad t\in [0,\tau ). \end{aligned}$$

Hence (see (2.8)), \(h_z=h_y\) identically on \([0,\tau )\), so that (2.17) holds. Finally, (2.19) shows that z lies in \(B_\rho (0)\). \(\square \)

In the following result, we point out another simple consequence of Theorems 2 and 3.

Corollary 2

Suppose that Eq. (1.6) has an exponential dichotomy on \([0,\infty )\). Assume that f and \(f_x\) are continuous on \([0, \infty )\times {\mathbb {R}}^n\) and there exist \(L_1\ge 0\) and \(L_2>0\) such that

$$\begin{aligned} |f_x(t,x)|\le L_1+L_2|x|\qquad \hbox { for all } t\ge 0 \hbox { and } x\in {\mathbb {R}}^n. \end{aligned}$$
(2.21)

If

$$\begin{aligned} L_1<\frac{\lambda }{2N} \end{aligned}$$
(2.22)

and

$$\begin{aligned} 0<\rho <\frac{1}{L_2}\biggl (\frac{\lambda }{2N}-L_1\biggr ), \end{aligned}$$
(2.23)

then Eq. (1.5) has the conditional Lipschitz shadowing property in \(B_\rho (0)\). Moreover, if instead of the existence of an exponential dichotomy, we assume that Eq. (1.6) has an exponential contraction or exponential expansion on \([0,\infty )\), then the conditional Lipschitz shadowing property of Eq. (1.5) in \(B_\rho (0)\) holds under the weaker conditions

$$\begin{aligned} L_1<\frac{\lambda }{N} \end{aligned}$$
(2.24)

and

$$\begin{aligned} 0<\rho <\frac{1}{L_2}\biggl (\frac{\lambda }{N}-L_1\biggr ). \end{aligned}$$
(2.25)

Proof

We give a proof of the first statement of the corollary. The proof of the second statement is similar and hence it is omitted.

In view of (2.23), we have that

$$\begin{aligned} L_1+L_2\rho <\frac{\lambda }{2N}. \end{aligned}$$

Choose \(\delta >0\) such that

$$\begin{aligned} L_1+L_2(\rho +\delta )<\frac{\lambda }{2N}. \end{aligned}$$

From this and (2.21), we have for all \(t\ge 0\) and \(x\in B_{\rho +\delta }(0)\),

$$\begin{aligned} |f_x(t,x)|\le L_1+L_2|x|\le L_1+L_2(\rho +\delta )<\frac{\lambda }{2N}. \end{aligned}$$

This implies that condition (2.15) of Corollary 1 holds with \(L:=L_1+L_2(\rho +\delta )\). Since L satisfies (2.5), the conditional Lipschitz shadowing property of Eq. (1.5) in \(B_\rho (0)\) follows from the first statement of Corollary 1. \(\square \)

In the following example, we show the importance and the sharpness of the assumptions of Corollary 2.

Example 1

Consider the scalar autonomous equation

$$\begin{aligned} x'=-x-x^2-\frac{1}{4} =-\biggl (x+\frac{1}{2}\biggr )^2, \end{aligned}$$
(2.26)

which is a special case of (1.5) when \(n=1\), \(A(t)=-1\) and \(f(t,x)=-x^2-\frac{1}{4}\) for \(t\in [0, \infty )\) and \(x\in \mathbb R\). Its linear part \(x'=-x\) admits an exponential contraction with \(T(t,s)=e^{-(t-s)}\), \(N=1\) and \(\lambda =1\). Moreover, f satisfies (2.21) with \(L_1=0\) and \(L_2=2\). Therefore, conditions (2.24) and (2.25) of Corollary 2 reduce to \(0<\rho <1/2\). It follows from Corollary 2 that (2.26) has the conditional Lipschitz shadowing property in \(B_\rho (0)=[-\rho , \rho ]\) for any \(\rho \in (0, 1/2)\). We will show the importance of the condition \(\rho <1/2\) by proving that the conditional Lipschitz shadowing property for Eq. (2.26) in \(B_{1/2}(0)=[-1/2, 1/2]\) does not hold. Suppose, for the sake of contradiction, that (2.26) has the conditional Lipschitz shadowing property in \(B_{1/2}(0)\) with some constants \(\varepsilon _0, \kappa >0\). Choose

$$\begin{aligned} \delta \in \left( 0,\min \bigl \{1, \varepsilon _0^{1/2}, \kappa ^{-1}\bigr \}\right) \end{aligned}$$
(2.27)

so that

$$\begin{aligned} \varepsilon :=\delta ^2<\varepsilon _0 \qquad \text {and}\qquad \kappa \delta ^2<\delta . \end{aligned}$$
(2.28)

Let y denote the unique solution of the initial value problem

$$\begin{aligned} y'=-\biggl (y+\frac{1}{2}\biggr )^2+\delta ^2,\qquad y(0)=-\frac{1}{2}. \end{aligned}$$
(2.29)

Observe that

$$\begin{aligned} y(t)=-\frac{1}{2}+\frac{1-e^{-2\delta t}}{1+e^{-2\delta t}}\delta \qquad \hbox { for}\ t\ge 0. \end{aligned}$$
(2.30)

Hence,

$$\begin{aligned} \lim _{t\rightarrow \infty }y(t)=-\frac{1}{2}+\delta , \end{aligned}$$
(2.31)

while (2.27), (2.28), (2.29) and (2.30) imply that

$$\begin{aligned} -\frac{1}{2}\le y(t)\le -\frac{1}{2}+\delta <\frac{1}{2}\qquad \text {for all } t\in [0,\infty ), \end{aligned}$$
(2.32)

and

$$\begin{aligned} \sigma _y=\sup _{t\ge 0}\biggl |\,y'(t)+\biggl (y(t)+\frac{1}{2}\biggr )^2\,\biggr |=\delta ^2=\varepsilon . \end{aligned}$$
(2.33)

This shows that \(y:[0, \infty ) \rightarrow {\mathbb {R}}\) is a pseudosolution of (2.26) on \([0,\infty )\) such that \(\sigma _y=\varepsilon <\varepsilon _0\) (see (2.28)) and \(y(t)\in B_{1/2}(0)\) for \(t\ge 0\). Hence, there exists a solution \(x:[0, \infty )\rightarrow {\mathbb {R}}\) of (2.26) such that

$$\begin{aligned} \sup _{t\ge 0}|x(t)-y(t)|\le \kappa \varepsilon =\kappa \delta ^2. \end{aligned}$$
(2.34)

It follows by elementary calculations that if \(c\in {\mathbb {R}}\), then the unique noncontinuable solution x of (2.26) with \(x(0)=c\) is given by

$$\begin{aligned} x(t)=-\frac{1}{2}+\frac{\gamma _c}{\gamma _c t+1}\qquad \hbox { with}\ \gamma _c:=c+\frac{1}{2} \end{aligned}$$

for \(t\in I_c\), where \(I_c=(-\infty ,\infty )\) for \(c=-\frac{1}{2}\), \(I_c=\bigl (-\frac{1}{\gamma _c},\infty \bigr )\) for \(c>-\frac{1}{2}\) and \(I_c=\bigl (-\infty ,-\frac{1}{\gamma _c}\bigr )\) for \(c<-\frac{1}{2}\). Since the solution x satisfying (2.34) is defined on \([0, \infty )\), the last possibility is excluded, and thus \(\lim _{t\rightarrow \infty }x(t)=-\frac{1}{2}\). From this and (2.31), we have that \(\lim _{t\rightarrow \infty }(y(t)-x(t))=\delta \). This, together with (2.28) and (2.34) implies that

$$\begin{aligned} \delta =\lim _{t\rightarrow \infty }|x(t)-y(t)|\le \sup _{t\ge 0}|x(t)-y(t)|\le \kappa \delta ^2<\delta , \end{aligned}$$

which is a contradiction. Thus, (2.26) does not have the conditional Lipschitz shadowing property in \(B_{1/2}(0)\).
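The mechanism behind this contradiction can also be observed numerically. In the following hedged sketch (the value \(\delta =0.1\) and the grid are our own choices), we check that the explicit function (2.30) solves the initial value problem (2.29), and that its distance from the limit \(-1/2\) of every solution of (2.26) defined on \([0,\infty )\) approaches \(\delta \), which is what defeats the shadowing estimate (2.34):

```python
import numpy as np

# Hedged numerical companion to Example 1 (delta and the grid are our own
# choices): y from (2.30) solves the IVP (2.29), while every solution of
# (2.26) defined on [0, inf) tends to -1/2, so y(t) - x(t) approaches delta.
delta = 0.1
t = np.linspace(0.0, 200.0, 200001)
y = -0.5 + delta * (1 - np.exp(-2 * delta * t)) / (1 + np.exp(-2 * delta * t))

y_prime = np.gradient(y, t)            # finite-difference derivative of y
# residual of (2.29): y' + (y + 1/2)^2 - delta^2 should vanish
residual = np.max(np.abs(y_prime + (y + 0.5) ** 2 - delta ** 2))

gap = abs(y[-1] - (-0.5))              # distance from lim x(t) = -1/2
# residual stays at the level of the finite-difference error, while gap is
# essentially delta, matching the limit (2.31)
```

The quantity `gap` is bounded below by \(\delta \) in the limit, while (2.34) would force it below \(\kappa \delta ^2<\delta \); this is the contradiction derived above.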

Remark 3

It is easy to see that a conclusion similar to the one obtained in Corollary 2 holds true for more general perturbations of Eq. (1.6). For instance, if Eq. (1.6) has an exponential dichotomy on \([0,\infty )\), \(L_1\) satisfies (2.22) and f is such that

$$\begin{aligned} |f_x(t,x)|\le L_1+L_2|x|+L_3|x|^2+\ldots +L_{k+1}|x|^k\qquad \text {for all }t\ge 0 \text { and }x\in {\mathbb {R}}^n, \end{aligned}$$

with \(k\in {\mathbb {N}}\) and \(L_j\ge 0\) for every \(j\in \{1,\ldots , k+1\}\), then for \(\rho >0\) small enough such that

$$\begin{aligned}L_1+L_2\rho +L_3\rho ^2+\ldots +L_{k+1}\rho ^k< \frac{\lambda }{2N},\end{aligned}$$

Eq. (1.5) has the conditional Lipschitz shadowing property in \(B_\rho (0)\).
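For concreteness, the admissible radius in Remark 3 can be computed numerically. In this hedged sketch the constants \(N\), \(\lambda \) and \(L_j\) are illustrative choices of our own, not values from the paper:

```python
import numpy as np

# Hedged sketch for Remark 3 with illustrative constants of our own choosing
# (N = 1, lambda = 1, L1 = 0.1, L2 = 1, L3 = 2): the largest admissible rho is
# the positive root of L3*rho^2 + L2*rho + (L1 - lambda/(2N)) = 0.
L1, L2, L3, N, lam = 0.1, 1.0, 2.0, 1.0, 1.0
roots = np.roots([L3, L2, L1 - lam / (2 * N)])
rho_max = max(r.real for r in roots if abs(r.imag) < 1e-12 and r.real > 0)
# every rho in (0, rho_max) satisfies L1 + L2*rho + L3*rho^2 < lambda/(2N),
# so Eq. (1.5) has the conditional Lipschitz shadowing property in B_rho(0)
```

With these constants, `rho_max` is roughly 0.26, and any smaller positive radius satisfies the strict inequality of Remark 3.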

3 Conditional Lipschitz Shadowing via the Logarithmic Norm

In this section, we give sufficient conditions for the conditional Lipschitz shadowing of Eq. (1.1). These conditions will be formulated in terms of the logarithmic norm \(\mu \) defined by (1.9). Let us recall some useful properties of \(\mu \) from [9, p. 41]. For every \(\alpha \ge 0\) and \(A, B\in {\mathbb {R}}^{n\times n}\), we have

$$\begin{aligned} \mu (\alpha A)&=\alpha \mu (A),\end{aligned}$$
(3.1)
$$\begin{aligned} |\mu (A)|&\le |A|,\end{aligned}$$
(3.2)
$$\begin{aligned} \mu (A+B)&\le \mu (A)+\mu (B),\end{aligned}$$
(3.3)
$$\begin{aligned} |\mu (A)-\mu (B)|&\le |A-B|. \end{aligned}$$
(3.4)

The values of |A| and \(\mu (A)\) for the most commonly used norms

$$\begin{aligned} |x|_\infty =\max _i|x_i|,\qquad |x|_1=\sum _i|x_i|,\qquad |x|_2=\biggl (\sum _i|x_i|^2\biggr )^{1/2}, \end{aligned}$$

in \({\mathbb {R}}^n\) are given by

$$\begin{aligned} |A|_\infty =\max _i\sum _k|a_{ik}|,\qquad |A|_1=\max _k\sum _i|a_{ik}|,\qquad |A|_2=\sqrt{s(A^TA)}, \end{aligned}$$

and

$$\begin{aligned} \mu _\infty (A)=\max _i\biggl (a_{ii}+\sum _{k,\,k\ne i}|a_{ik}|\biggr ),\,\, \mu _1(A)=\max _k\biggl (a_{kk}+\sum _{i,\,i\ne k}|a_{ik}|\biggr ),\,\, \mu _2(A)=s\biggl (\frac{A^T+A}{2}\biggr ), \end{aligned}$$

where \(s(A^TA)\) and \(s((A^T+A)/2)\) denote the largest (real) eigenvalues of \(A^TA\) and \((A^T+A)/2\), respectively. We will also need the following auxiliary result.
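Before that, as a hedged numerical aside (the sample matrix is a hypothetical example of our own choosing), the explicit formulas above can be verified directly, together with the limit definition (1.9) and the property \(|\mu (A)|\le |A|\) from (3.2):

```python
import numpy as np

# Hedged numerical aside (the matrix A is our own hypothetical example):
# the explicit formulas for |A| and mu(A) in the infinity-, 1- and 2-norms.
A = np.array([[-3.0, 1.0],
              [0.5, -2.0]])

norm_inf = np.max(np.sum(np.abs(A), axis=1))            # maximum row sum
norm_1 = np.max(np.sum(np.abs(A), axis=0))              # maximum column sum
norm_2 = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))   # sqrt of s(A^T A)

d = np.diag(A)
mu_inf = np.max(d + np.sum(np.abs(A), axis=1) - np.abs(d))  # row formula
mu_1 = np.max(d + np.sum(np.abs(A), axis=0) - np.abs(d))    # column formula
mu_2 = np.max(np.linalg.eigvalsh((A + A.T) / 2.0))          # s((A^T + A)/2)

# approximating the limit (1.9) in the infinity-norm reproduces mu_inf
h = 1e-6
mu_inf_limit = (np.max(np.sum(np.abs(np.eye(2) + h * A), axis=1)) - 1.0) / h
# all three logarithmic norms come out negative although the corresponding
# matrix norms are positive, illustrating that mu is not a norm
```

Here \(\mu _\infty (A)=-1.5\) and \(\mu _1(A)=-1\), while \(|A|_\infty =4\) and \(|A|_1=3.5\), so the example also shows how strict the inequality (3.2) can be.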

Lemma 1

For any continuous map \(M:[0,1]\rightarrow {\mathbb {R}}^{n\times n}\), we have that

$$\begin{aligned} \mu \biggl (\int _0^1 M(s)\,ds\biggr )\le \int _0^1\mu \left( M(s)\right) \,ds. \end{aligned}$$
(3.5)

Proof

The existence of the integral on the right-hand side of (3.5) is a consequence of the continuity of \(\mu :{\mathbb {R}}^{n\times n}\rightarrow {\mathbb {R}}\) (see (3.4)). In order to prove (3.5), observe that the subadditivity and positive homogeneity of \(\mu \) (see (3.1) and (3.3)), applied to the integral sums \(\sum _{i=1}^k M(s_i)\Delta s_i\), where \(0=s_0<s_1<\dots <s_k=1\) is a partition of [0, 1] and \(\Delta s_i:=s_i-s_{i-1}\) for \(i=1,\dots ,k\), imply that

$$\begin{aligned} \mu \biggl (\sum _{i=1}^k M(s_i)\Delta s_i\biggr )\le \sum _{i=1}^k\mu \left( M(s_i)\right) \Delta s_i. \end{aligned}$$

Letting \(\Delta :=\max _{1\le i\le k}\Delta s_i\rightarrow 0\) and using the continuity of \(\mu \) again, we conclude that (3.5) holds. \(\square \)
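The inequality of Lemma 1 can also be checked numerically. In the following hedged sketch, the matrix path \(M(s)\) is a hypothetical example of our own choosing, and the integrals are approximated by the composite trapezoidal rule:

```python
import numpy as np

# Hedged numerical check of Lemma 1 (the path M(s) is our own hypothetical
# example): compare mu(int_0^1 M) with int_0^1 mu(M) for the logarithmic 2-norm.
def mu2(A):
    # mu_2(A) = largest eigenvalue of the symmetric part (A + A^T)/2
    return np.max(np.linalg.eigvalsh((A + A.T) / 2.0))

M = lambda s: np.array([[-2.0 + s, s ** 2],
                        [np.sin(s), -1.0 - s]])

s = np.linspace(0.0, 1.0, 2001)
ds = s[1] - s[0]
mats = np.array([M(v) for v in s])

def trap(vals):
    # composite trapezoidal rule along the first axis
    vals = np.asarray(vals, dtype=float)
    return ds * (vals.sum(axis=0) - 0.5 * (vals[0] + vals[-1]))

lhs = mu2(trap(mats))                 # mu( int_0^1 M(s) ds )
rhs = trap([mu2(A) for A in mats])    # int_0^1 mu( M(s) ) ds
# inequality (3.5): lhs <= rhs, strictly here since M(s) is not constant
```

For this non-constant path the inequality (3.5) is strict by a visible margin, reflecting the subadditivity of \(\mu \) used in the proof.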

Now we can state and prove the main result of this section which provides a sufficient condition under which Eq. (1.1) has the conditional Lipschitz shadowing property in a given set \(H\subset {\mathbb {R}}^n\).

Theorem 5

Let \(\emptyset \ne H\subset {\mathbb {R}}^n\) and suppose that g and \(g_x\) are continuous on \([0, \infty )\times \mathbb R^n\). If there exist \(\delta >0\) and \(m>0\) such that

$$\begin{aligned} \mu (g_x(t, x))\le -m\qquad \hbox { for all } t\ge 0 \hbox { and } x\in {\mathcal {N}}_\delta (H), \end{aligned}$$
(3.6)

then Eq. (1.1) has the conditional Lipschitz shadowing property in H.

Proof

We will show that Eq. (1.1) has the conditional Lipschitz shadowing property in H with \(\varepsilon _0=m\delta \) and \(\kappa =m^{-1}\). Suppose that \(0<\varepsilon \le \varepsilon _0=m\delta \) and y is a pseudosolution of Eq. (1.1) on \([0,\tau )\) for some \(\tau \in (0,\infty ]\) such that \(\sigma _y=\sup _{0\le t<\tau }|y'(t)-g(t, y(t))|\le \varepsilon \) and \(y(t)\in H\) for all \(t\in [0,\tau )\). Let x be the noncontinuable solution of Eq. (1.1) with initial value \(x(0)=y(0)\). It is known (see, e.g., [9, Chap. I (III), p. 16]) that x is defined on \([0,\sigma )\) for some \(\sigma \in (0,\infty ]\) and \(\sigma =\infty \) whenever x is bounded. Let \(\omega :=\min \{\sigma ,\tau \}\). Define

$$\begin{aligned} z(t):=x(t)-y(t)\qquad \hbox { for}\ t\in [0,\omega ). \end{aligned}$$
(3.7)

We claim that

$$\begin{aligned} |z(t)|<\frac{\varepsilon }{m}\qquad \hbox { for all}\ t\in [0,\omega ). \end{aligned}$$
(3.8)

Suppose, for the sake of contradiction, that (3.8) does not hold. Since \(z(0)=x(0)-y(0)=0\), there exists \(t_1\in (0,\omega )\) such that

$$\begin{aligned} |z(t)|<\frac{\varepsilon }{m}\quad \hbox { for all}\ t\in [0,t_1)\qquad \text {and}\qquad |z(t_1)|=\frac{\varepsilon }{m}. \end{aligned}$$
(3.9)

From (1.1) and (3.7), we find for \(t\in [0,\omega )\),

$$\begin{aligned} z'(t)=g(t, y(t)+z(t))-y'(t)=g(t, y(t)+z(t))-g(t, y(t))+k_y(t), \end{aligned}$$

where

$$\begin{aligned} k_y(t):=g(t, y(t))-y'(t)\qquad \hbox { for}\ t\in [0,\omega ). \end{aligned}$$

Hence

$$\begin{aligned} z'(t)=A(t)z(t)+k_y(t)\qquad \hbox { for}\ t\in [0,\omega ), \end{aligned}$$
(3.10)

where

$$\begin{aligned} A(t):=\int _0^1 g_x(t, y(t)+sz(t))\,ds\qquad \hbox { for}\ t\in [0,\omega ). \end{aligned}$$
(3.11)

From (3.5) and (3.11), we obtain

$$\begin{aligned} \mu \left( A(t)\right) \le \int _0^1 \mu (g_x(t, y(t)+sz(t)))\,ds \qquad \hbox { for}\ t\in [0,\omega ). \end{aligned}$$
(3.12)

Let \(t\in [0,t_1]\) be fixed. In view of (3.9), for every \(s\in [0,1]\), we have

$$\begin{aligned} |\left( y(t)+sz(t)\right) -y(t)|=s|z(t)|\le |z(t)|\le \frac{\varepsilon }{m}\le \frac{\varepsilon _0}{m}=\delta . \end{aligned}$$

Hence, for every \(t\in [0,t_1]\) and \(s\in [0,1]\), we have that \(y(t)\in H\) and \(y(t)+sz(t)\in B_\delta (y(t))\subset {\mathcal {N}}_\delta (H)\). This, together with (3.6) and (3.12), yields

$$\begin{aligned} \mu \left( A(t)\right) \le -m\qquad \hbox { for all}\ t\in [0,t_1]. \end{aligned}$$
(3.13)

Let \(T(t,s)\) denote the transition matrix of the homogeneous linear differential equation (1.6), where \(A(t)\) is given by (3.11). Then, for every \(s\in [0,\omega )\) and \(\xi \in \mathbb R^n\), the solution of Eq. (1.6) with initial value \(\xi \) at \(t=s\) is \(T(t,s)\xi \) for \(t\in [0,\omega )\). By Coppel’s inequality [9, Chap. III, Theorem 3, p. 58], we have for \(0\le s\le t<\omega \),

$$\begin{aligned} |T(t,s)\xi |\le \exp \biggl (\int _s^t\mu (A(u))\,du\biggr )|\xi |. \end{aligned}$$

Since \(\xi \in {\mathbb {R}}^n\) was arbitrary, this implies that

$$\begin{aligned} |T(t,s)|=\sup _{0\ne \xi \in {\mathbb {R}}^n}\frac{|T(t,s)\xi |}{|\xi |}\le \exp \biggl (\int _s^t\mu (A(u))\,du\biggr ) \quad \hbox { whenever}\ 0\le s\le t<\omega . \nonumber \\ \end{aligned}$$
(3.14)
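In the constant-coefficient case \(A(t)\equiv A\), where \(T(t,s)=e^{A(t-s)}\), Coppel’s inequality is easy to test numerically. The sketch below, assuming the row-sum formula for \(\mu _\infty \) and a hypothetical matrix \(A\), verifies \(|e^{At}|_\infty \le e^{\mu _\infty (A)t}\) on a grid of times:

```python
import numpy as np

def mu_inf(A):
    # Logarithmic norm induced by the infinity norm.
    off = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return float(np.max(np.diag(A) + off))

def expm(B, terms=80):
    # Plain Taylor series for the matrix exponential; adequate here
    # because ||B|| stays moderate on the time grid below.
    E, term = np.eye(B.shape[0]), np.eye(B.shape[0])
    for k in range(1, terms):
        term = term @ B / k
        E = E + term
    return E

A = np.array([[-3.0, 1.0], [0.5, -2.0]])   # hypothetical constant coefficients
mu = mu_inf(A)                             # max(-3 + 1, -2 + 0.5) = -1.5
for t in np.linspace(0.0, 3.0, 16):
    # Coppel's inequality: |T(t,0)|_inf <= exp(mu * t)
    assert np.linalg.norm(expm(A * t), ord=np.inf) <= np.exp(mu * t) + 1e-9
```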

Since z is a solution of the nonhomogeneous equation (3.10) with initial value \(z(0)=0\), by the variation of constants formula, we have

$$\begin{aligned} z(t)=\int _0^t T(t,s)k_y(s)\,ds\qquad \hbox { for all}\ t\in [0,\omega ). \end{aligned}$$

From this, (3.13) and (3.14), and taking into account that \(\sup _{0\le t<\omega }|k_y(t)|\le \sigma _y\le \varepsilon \), we obtain

$$\begin{aligned} |z(t_1)|&\le \int _0^{t_1} |T(t_1,s)||k_y(s)|\,ds\le \varepsilon \int _0^{t_1} \exp \biggl (\int _s^{t_1}\mu (A(u))\,du\biggr )\,ds\\&\le \varepsilon \int _0^{t_1}e^{-m(t_1-s)}\,ds=\frac{\varepsilon }{m}(1-e^{-mt_1})<\frac{\varepsilon }{m}. \end{aligned}$$

This contradicts (3.9) and hence (3.8) holds.

Next we show that \(\sigma \ge \tau \). Otherwise, \(0<\sigma <\tau \) and hence \(\omega =\sigma \). This, together with (3.7) and (3.8), implies that for all \(t\in [0,\sigma )\),

$$\begin{aligned} |x(t)|=|y(t)+z(t)|\le |y(t)|+|z(t)|\le \max _{0\le s\le \sigma }|y(s)|+\frac{\varepsilon }{m}. \end{aligned}$$

Consequently, x is bounded on \([0,\sigma )\) and hence \(\sigma =\infty \), contradicting the assumption that \(\sigma <\tau \). Thus, \(\sigma \ge \tau \) and hence \(\omega =\tau \). This, together with (3.7) and (3.8), implies that condition (1.4) is satisfied with \(\kappa =m^{-1}\). The proof of the theorem is complete. \(\square \)

Example 2

(Example 1 revisited) We note that Eq. (2.26) is a special case of (1.1) with

$$\begin{aligned} g(t,x)=-\bigg (x+\frac{1}{2}\bigg )^2, \qquad t\ge 0, \ x\in {\mathbb {R}}. \end{aligned}$$

In Example 1 we have shown that Eq. (2.26) has the conditional Lipschitz shadowing property in \([-\rho , \rho ]\), for every \(0<\rho <\frac{1}{2}\). From Theorem 5, we can deduce a stronger result showing that the interval \([-\rho ,\rho ]\) with \(\rho \in \left( 0,\frac{1}{2}\right) \) can be replaced with the larger interval \([-\rho ,\infty )\). Indeed, as already noted, in the scalar case, we have that \(\mu (A)=A\), and hence condition (3.6) reduces to

$$\begin{aligned} g_x(t,x)\le -m<0\qquad \hbox { for all } t\ge 0 \hbox { and } x\in {\mathcal {N}}_\delta (H). \end{aligned}$$
(3.15)

Let \(H:=[-\rho ,\infty )\), where \(0<\rho <\frac{1}{2}\). Choose \(\delta \in (0,\frac{1}{2}-\rho )\). Then, for all \(x\in \mathcal N_\delta (H)=[-\rho -\delta ,\infty )\),

$$\begin{aligned} g_x(t,x)=-2\biggl (x+\frac{1}{2}\biggr )\le -2\biggl (-\rho -\delta +\frac{1}{2}\biggr )<0, \end{aligned}$$

which shows that condition (3.15) is satisfied with \(m:=2(-\rho -\delta +\frac{1}{2})>0\). By the application of Theorem 5, we conclude that, for every \(\rho \in (0,\frac{1}{2})\), Eq. (2.26) has the conditional Lipschitz shadowing property in \([-\rho ,+\infty )\). Since the result obtained in Example 1 implies that Eq. (2.26) does not have the conditional Lipschitz shadowing property in \([-\frac{1}{2},+\infty )\), this is the best possible result.
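Under the assumption that fixed-step RK4 integration of this smooth scalar equation is sufficiently accurate, the conclusion above can be illustrated numerically: a pseudosolution of (2.26) generated by a hypothetical forcing term \(\varepsilon \sin t\) stays within \(\varepsilon /m\) of the true solution with the same initial value, in line with the bound (3.8):

```python
import math

def g(t, x):
    # Right-hand side of Eq. (2.26): x' = -(x + 1/2)^2
    return -(x + 0.5) ** 2

def rk4(f, x0, t0, t1, n):
    # Classical fixed-step RK4; returns the solution values on the grid.
    h = (t1 - t0) / n
    t, x, out = t0, x0, [x0]
    for _ in range(n):
        k1 = f(t, x)
        k2 = f(t + h / 2, x + h * k1 / 2)
        k3 = f(t + h / 2, x + h * k2 / 2)
        k4 = f(t + h, x + h * k3)
        x += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
        out.append(x)
    return out

rho, delta = 0.4, 0.05
m = 2 * (0.5 - rho - delta)        # m = 0.1, cf. (3.15)
eps = m * delta                    # eps = eps_0 = m * delta = 0.005
# Pseudosolution: hypothetical perturbed equation y' = g(t, y) + eps*sin t.
y = rk4(lambda t, x: g(t, x) + eps * math.sin(t), 0.0, 0.0, 5.0, 5000)
assert all(v >= -rho for v in y)   # y(t) stays in H = [-rho, infinity)
# True solution with the same initial value y(0) = 0.
x = rk4(g, 0.0, 0.0, 5.0, 5000)
assert max(abs(a - b) for a, b in zip(x, y)) < eps / m   # |x - y| < eps/m
```

Here \(\rho =0.4\) and \(\delta =0.05\) are sample values; any \(\rho \in (0,\frac{1}{2})\) and \(\delta \in (0,\frac{1}{2}-\rho )\) would serve equally well.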

The following corollary of Theorem 5, corresponding to \(H={\mathbb {R}}^n\), provides a new criterion for the standard Lipschitz shadowing property of Eq. (1.1) and is therefore of independent interest.

Corollary 3

Suppose that g and \(g_x\) are continuous on \([0, \infty ) \times {\mathbb {R}}^n\) and there exists \(m>0\) such that

$$\begin{aligned} \mu (g_x(t, x))\le -m\qquad \hbox { for all } t\ge 0 \hbox { and } x\in {\mathbb {R}}^n. \end{aligned}$$

Then, Eq. (1.1) has the Lipschitz shadowing property.

Now we present a simple corollary of Theorem 5 for the autonomous equation

$$\begin{aligned} x'=h(x), \end{aligned}$$
(3.16)

where \(h:{\mathbb {R}}^n\rightarrow {\mathbb {R}}^n\) is continuously differentiable.

Corollary 4

Let H be a nonempty, bounded subset of \({\mathbb {R}}^n\). If \(h:{\mathbb {R}}^n\rightarrow {\mathbb {R}}^n\) is a continuously differentiable function such that

$$\begin{aligned} \sup _{x\in H}\mu (h'( x))<0, \end{aligned}$$
(3.17)

then Eq. (3.16) has the conditional Lipschitz shadowing property in H.

Proof

Equation (3.16) is a special case of (1.1) with \(g(t,x):=h(x)\) for \(t\ge 0\) and \(x\in {\mathbb {R}}^n\). Choose \(\varepsilon \in (0,-k)\), where \(k:=\sup _{x\in H}\mu (h'(x))<0\) (see (3.17)). Since H is bounded, there exists \(\rho >0\) such that \(H\subset B_\rho (0)\). The continuity of \(\mu \) and \(h'\) implies that \(\mu \circ h'\) is uniformly continuous on the compact set \(B_{\rho +1}(0)\). Therefore, there exists \(\delta \in (0,1)\) such that

$$\begin{aligned} |\mu (h'(x))-\mu (h'({{\tilde{x}}}))|\le \varepsilon \qquad \hbox {whenever } x,{{\tilde{x}}}\in B_{\rho +1}(0) \,\hbox {and}\,|x-{{\tilde{x}}}|\le \delta . \end{aligned}$$
(3.18)

Let \(x\in {\mathcal {N}}_\delta (H)\). Then, there exists \({{\tilde{x}}}\in H\) such that \(|x-{{\tilde{x}}}|\le \delta \). Hence, \({{\tilde{x}}}\in H\subset B_\rho (0)\) and \(|x|\le |{{\tilde{x}}}|+|x-\tilde{x}|\le \rho +\delta <\rho +1\). Thus, we have that \(x,{{\tilde{x}}}\in B_{\rho +1}(0)\) and \(|x-{{\tilde{x}}}|\le \delta \). From (3.18) and the definition of k, we obtain that

$$\begin{aligned} \mu (g_x(t,x))=\mu (h'(x))\le \mu (h'({{\tilde{x}}}))+\varepsilon \le k+\varepsilon . \end{aligned}$$

Since \(x\in {\mathcal {N}}_\delta (H)\) was arbitrary, condition (3.6) is satisfied with \(m=-(k+\varepsilon )>0\). The desired conclusion now follows readily from Theorem 5. \(\square \)

Finally, we illustrate the importance of assumption (3.17) of Corollary 4 in a special case of a classic model from epidemiology.

Example 3

Consider the system

$$\begin{aligned} \begin{aligned} S'&= 1-IS-S, \\ I'&= IS-I, \end{aligned} \end{aligned}$$
(3.19)

which is a special case of the modified Kermack–McKendrick equation (see [11, Chap. 2, Sec. 2.3, p. 53]). Biologically meaningful solutions are generated by initial data (S(0), I(0)) from the set

$$\begin{aligned} \Gamma := \left\{ (S,I) \in {\mathbb {R}}^2:S\ge 0,\,I\ge 0,\,S+I \le 1 \right\} . \end{aligned}$$

For \(c\in [0,1)\), define

$$\begin{aligned} \Gamma _c:= \left\{ (S,I) \in {\mathbb {R}}^2:S\ge 0,\,I\ge 0,\,S+I \le 1-c\right\} . \end{aligned}$$

Observe that \(\Gamma _0=\Gamma \) and \(\Gamma _c\subset \Gamma _0\) for \(c\in [0,1)\). Eq. (3.19) is a special case of Eq. (3.16), where \(h:{\mathbb {R}}^2\rightarrow {\mathbb {R}}^2\) is given by

$$\begin{aligned} h(S,I)=(1-IS-S,IS-I)^T\qquad \hbox { for}\ (S,I)^T\in {\mathbb {R}}^2. \end{aligned}$$

We will show that, for every \(c\in (0,1)\), Eq. (3.19) has the conditional Lipschitz shadowing property in \(\Gamma _c\), but the same property in \(\Gamma _0\) does not hold. Evidently, h is continuously differentiable and

$$\begin{aligned} h'(S,I)= \begin{pmatrix} -I-1 &{} -S \\ I &{} S-1 \\ \end{pmatrix} \qquad \hbox { for}\ (S,I)^T\in {\mathbb {R}}^2. \end{aligned}$$

Hence,

$$\begin{aligned} \mu _\infty (h'(S,I))=\max \{\,-I-1+|S|,S-1+|I|\,\} \qquad \hbox { for}\ (S,I)^T\in {\mathbb {R}}^2. \end{aligned}$$
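The displayed formula for \(\mu _\infty (h'(S,I))\) can be confirmed numerically. The following sketch assumes the standard row-sum formula for the logarithmic norm induced by the infinity norm and compares it with the closed form at randomly sampled points:

```python
import numpy as np

def mu_inf(A):
    # Logarithmic norm induced by the infinity norm.
    off = np.abs(A).sum(axis=1) - np.abs(np.diag(A))
    return float(np.max(np.diag(A) + off))

def jac(S, I):
    # Jacobian h'(S, I) of system (3.19)
    return np.array([[-I - 1.0, -S], [I, S - 1.0]])

rng = np.random.default_rng(0)
for S, I in rng.uniform(-2.0, 2.0, size=(200, 2)):
    closed_form = max(-I - 1.0 + abs(S), S - 1.0 + abs(I))
    assert abs(mu_inf(jac(S, I)) - closed_form) < 1e-12
```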

From this and the definition of \(\Gamma _c\), we obtain

$$\begin{aligned} \mu _\infty (h'(S,I))=S-1+I\le -c\qquad \hbox { for all}\ (S,I)^T\in \Gamma _c, \end{aligned}$$
(3.20)

where \(c\in [0,1)\) is arbitrary. This shows that if \(c\in (0,1)\), then condition (3.17) of Corollary 4 is satisfied with \(\mu =\mu _\infty \) and \(H=\Gamma _c\). By the application of Corollary 4, we conclude that, for every \(c\in (0,1)\), Eq. (3.19) has the conditional Lipschitz shadowing property in \(\Gamma _c\). Next we show that the same property does not hold in \(\Gamma _0\). Suppose, for the sake of contradiction, that Eq. (3.19) has the conditional Lipschitz shadowing property in \(\Gamma _0\). Note that the definition of the conditional Lipschitz shadowing property is independent of the norm used, since all norms on \({\mathbb {R}}^n\) are equivalent. Therefore, we may (and do) use the infinity norm \(|\cdot |_\infty \) on \({\mathbb {R}}^2\). Let \(\varepsilon _0, \kappa >0\) be the constants from the definition of the conditional Lipschitz shadowing property in \(\Gamma _0\). It is easily verified that for every \(\varepsilon >0\),

$$\begin{aligned} P_\varepsilon :=(1-\sqrt{\varepsilon },\sqrt{\varepsilon })^T\in {\mathbb {R}}^2 \end{aligned}$$

is an equilibrium of the system

$$\begin{aligned} \begin{aligned} S'&= 1-IS-S-\varepsilon , \\ I'&= IS - I+\varepsilon , \end{aligned} \end{aligned}$$

which is a perturbation of Eq. (3.19). Therefore, for every \(\varepsilon \in (0,\min \{\varepsilon _0,1\})\), \(P_\varepsilon =(1-\sqrt{\varepsilon },\sqrt{\varepsilon })^T\) is a constant pseudosolution of (3.19) on \([0,\infty )\) with maximum error \(\sigma _{P_\varepsilon }=\varepsilon \le \varepsilon _0\) and such that \(P_\varepsilon \in \Gamma _0\). By the definition of the conditional Lipschitz shadowing, this implies that, for every \(\varepsilon \in (0,\min \{\varepsilon _0,1\})\), Eq. (3.19) has a solution \((S_\varepsilon (t),I_\varepsilon (t))^T\) on \([0,\infty )\) such that

$$\begin{aligned} (S_{\varepsilon }(t),I_\varepsilon (t))^T\in B_{\kappa \varepsilon }(P_\varepsilon )=B_{\kappa \varepsilon }((1-\sqrt{\varepsilon }, \sqrt{\varepsilon })^T)\qquad \text {for all }t\in [0,\infty ). \end{aligned}$$
(3.21)
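That \(P_\varepsilon \) is an equilibrium of the perturbed system, and hence a constant pseudosolution of (3.19) with error exactly \(\varepsilon \) in the infinity norm, is a short computation; a minimal numerical check (sample values of \(\varepsilon \) only):

```python
import math

def h(S, I):
    # Right-hand side of system (3.19)
    return (1.0 - I * S - S, I * S - I)

for eps in (1e-1, 1e-2, 1e-3):
    S, I = 1.0 - math.sqrt(eps), math.sqrt(eps)   # the point P_eps
    dS, dI = h(S, I)
    # Equilibrium of the perturbed system: h(P_eps) + (-eps, +eps) = 0.
    assert abs(dS - eps) < 1e-12 and abs(dI + eps) < 1e-12
    # Error of the constant pseudosolution P_eps in the infinity norm.
    sigma = max(abs(dS), abs(dI))
    assert abs(sigma - eps) < 1e-12
```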

In particular, we have that

$$\begin{aligned} \emptyset \ne \omega (S_\varepsilon ,I_\varepsilon )\subset B_{\kappa \varepsilon }(P_{\varepsilon })\qquad \text { whenever } \varepsilon >0\text { is sufficiently small }, \end{aligned}$$
(3.22)

where \(\omega (S_\varepsilon ,I_\varepsilon )\) denotes the omega-limit set of the solution \((S_\varepsilon (t),I_\varepsilon (t))^T\). If \(\varepsilon >0\) is sufficiently small, then \(\kappa \varepsilon <\sqrt{\varepsilon }\). Hence,

$$\begin{aligned} B_{\kappa \varepsilon }(P_{\varepsilon })\subset G:=(0,1)\times (0,1)\qquad \text {whenever } \varepsilon >0 \text { is sufficiently small. } \end{aligned}$$
(3.23)

Choose \(\varepsilon >0\) small enough such that both (3.21) and (3.23) are satisfied. Define \(V:\mathbb R^2\rightarrow {\mathbb {R}}\) by

$$\begin{aligned} V(S,I)=I\qquad \hbox { for}\ (S,I)^T\in {\mathbb {R}}^2. \end{aligned}$$

Then \(V'(S,I)=(0,1)\) and for the derivative of V along system (3.19), we have

$$\begin{aligned} \dot{V}_{(3.19)}(S,I)=V'(S,I)h(S,I)=-I(1-S)\le 0\qquad \hbox { for}\ (S,I)^T\in {\overline{G}}=[0,1]\times [0,1]. \end{aligned}$$

Thus, V is a Lyapunov function for Eq. (3.19) on \({\overline{G}}\) (see [10, Chap. 2, Definition 6.1, p. 30]) and

$$\begin{aligned} E:=\{\,(S,I)^T\in {\overline{G}}\mid \dot{V}_{(3.19)}(S,I)=0\,\}= \{\,(S,I)^T\in {\overline{G}}\mid I=0 \ \hbox {or}\ S=1\,\}. \end{aligned}$$
(3.24)

By the application of LaSalle’s invariance principle (see, e.g., [10, Chap. 2, Theorem 6.1, p. 30]), we conclude that \(\omega (S_\varepsilon ,I_\varepsilon )\subset E\). This, combined with (3.22), yields

$$\begin{aligned} \emptyset \ne \omega (S_\varepsilon ,I_\varepsilon )\subset B_{\kappa \varepsilon }(P_{\varepsilon })\cap E. \end{aligned}$$

Thus, \(B_{\kappa \varepsilon }(P_{\varepsilon })\cap E\ne \emptyset \). On the other hand, (3.23) and (3.24) imply that \(B_{\kappa \varepsilon }(P_{\varepsilon })\cap E=\emptyset \). This contradiction proves that Eq. (3.19) does not have the conditional Lipschitz shadowing property in \(\Gamma _0\). Note that if we take \(H=\Gamma _0\) and \(\mu =\mu _\infty \), then (3.20) with \(c=0\) implies that

$$\begin{aligned} \sup _{(S,I)^T\in \Gamma _0}\mu _\infty (h'(S,I))=0, \end{aligned}$$

which shows the importance and the sharpness of condition (3.17) in Corollary 4.