1 Introduction

Let \({\mathcal {H}}\) be a real Hilbert space with inner product \(\langle .,.\rangle \) and associated norm \(\left\| . \right\| \). Throughout this paper, C is a nonempty closed and convex subset of \({\mathcal {H}}\), \(f:C\rightarrow C\) is a contraction mapping with coefficient \(\alpha \in [0,1)\), i.e.,

$$\begin{aligned} \left\| f(x)-f(y)\right\| \le \alpha \left\| x-y\right\| \quad \forall x,y \in C, \end{aligned}$$

and \(T:C\rightarrow C\) is a nonexpansive mapping, i.e.,

$$\begin{aligned} \left\| T(x)-T(y)\right\| \le \left\| x-y\right\| \quad \forall x,y\in C. \end{aligned}$$

Browder [6] and Kirk [15] independently proved in 1965 that if, in addition, the set C is bounded in \({\mathcal {H}}\), then T has at least one fixed point. In this work, we always assume that the set \(\text {Fix}(T)=\{x\in C:T(x)=x\}\) of fixed points of T is nonempty.

The investigation of a wide variety of convex optimization problems and variational inequality or variational inclusion problems in Hilbert or Banach space settings leads to the study of fixed points of appropriate nonexpansive mappings (see for instance [9, 10, 12, 22, 26], and references therein). Hence, it is of interest to construct efficient numerical algorithms, continuous or discrete, that approximate such fixed points. Let us first note that, unlike the case where T is a contraction, the classical iterative process \(x_{n+1}=T(x_{n})\) may fail to converge to a fixed point of T. To see this, it suffices to consider the example where the initial data is different from 0 and \(T=-I\), where I stands for the identity operator of \({\mathcal {H}}\). However, for any initial data \(x_0\in C\), the solution x(.) of the associated continuous dynamical system

$$\begin{aligned} \left\{ \begin{array}{l} x^{\prime }(t)+x(t)=T(x(t))\\ x(0)=x_{0}, \end{array} \right. \end{aligned}$$
(1.1)

converges weakly in \({\mathcal {H}}\) as \(t\rightarrow +\infty \) to a fixed point of T. For the proof, we refer the reader to [7], where a more general result on the asymptotic behavior of solutions of the differential inclusion \(0\in x^{\prime }(t)+A(x(t))\), with A a demipositive operator, is established. We notice the interesting fact that the explicit Euler discretization with variable step size \(h_n=1-\theta _n\) of the dynamical system (1.1) yields the well-known Krasnoselskii–Mann algorithm (see [19] and [16]):

$$\begin{aligned} x_{n+1}=\theta _n x_n+(1-\theta _n)T(x_n), \end{aligned}$$
(1.2)

which, for any initial data \(x_1\in C\), generates a sequence \((x_n)\) that converges weakly in \({\mathcal {H}}\) to a fixed point of T, provided the sequence \((\theta _n)\) belongs to (0, 1] and satisfies the condition

(C\(_0\)):

\(\sum _{n=0}^{\infty }(1-\theta _n)\theta _{n}=\infty \).

(For a proof of this result, see for instance [13]).
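For concreteness, the following minimal Python sketch runs the iteration (1.2); the plane rotation used as T and the constant choice \(\theta _n=\frac{1}{2}\) (which satisfies (C\(_0\))) are illustrative assumptions of ours, not taken from the cited references.

```python
import numpy as np

def krasnoselskii_mann(T, x0, theta, n_iter=2000):
    """Krasnoselskii-Mann iteration: x_{n+1} = theta_n * x_n + (1 - theta_n) * T(x_n)."""
    x = np.asarray(x0, dtype=float)
    for n in range(1, n_iter + 1):
        t = theta(n)
        x = t * x + (1.0 - t) * T(x)
    return x

# A plane rotation is nonexpansive (an isometry) with Fix(T) = {0}; the plain
# iteration x_{n+1} = T(x_n) circles forever, while the averaged scheme converges.
phi = 1.0
R = np.array([[np.cos(phi), -np.sin(phi)], [np.sin(phi), np.cos(phi)]])
T = lambda x: R @ x

x = krasnoselskii_mann(T, x0=[1.0, 0.0], theta=lambda n: 0.5)
print(np.linalg.norm(x - T(x)))  # residual close to 0
```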

In a seminal paper [14], Halpern introduced an algorithm that converges strongly to a particular, well-defined fixed point of a nonexpansive mapping. Precisely, he considered the particular case where C is the closed unit ball of \({\mathcal {H}}\) and established that, for every \(x_1\in C\), the sequence defined recursively by \(x_{n+1}=(1-\theta _n)T(x_n)\) converges strongly to the element of the set \(\text {Fix}(T)\) closest to the origin, provided that \( \theta _n=\frac{1}{n^\theta }\) with \(0<\theta <1\). Later, in 1977, Lions [18] generalized and improved Halpern's convergence result. In fact, he proved that, for every anchor point \(u\in C\) and any initial data \(x_1\in C\), the sequence \((x_n)\) generated by the process

$$\begin{aligned} x_{n+1}=\theta _n u+(1-\theta _n)T(x_n) \end{aligned}$$

converges strongly to the element \(u^\star \) of the set \(\text {Fix}(T)\) closest to u, provided the sequence \((\theta _n)\) belongs to (0, 1] and satisfies the conditions:

(C\(_{1}\)):

\(\theta _{n}\rightarrow 0\) as \(n\rightarrow \infty ,\)

(C\(_{2}\)):

\(\sum _{n=1}^{\infty }\theta _{n}=\infty ,\)

(C\(_{3}\)):

\(\frac{\left| \theta _{n+1}-\theta _{n}\right| }{\theta _{n}^2}\rightarrow 0\) as \(n\rightarrow \infty .\)

In 2000, Moudafi [20] introduced the so-called viscosity approximation method. Precisely, he considered the iterative process

$$\begin{aligned} x_{n+1}=\theta _{n}f(x_{n})+(1-\theta _{n})T(x_{n}), \end{aligned}$$
(DDS)

and proved that if \((\theta _{n})_{n}\) satisfies the conditions (C\(_{1}\)), (C\(_{2}\)) and

(C\(_{4}\)):

\(\frac{\left| \theta _{n+1}-\theta _{n}\right| }{\theta _{n}\theta _{n+1}}\rightarrow 0\) as \(n\rightarrow \infty ,\)

then, for any \(x_{1}\) in C, the sequence \((x_{n})\) generated by the process (DDS) converges strongly in \({\mathcal {H}}\) to the unique solution \(q^{*}\) of the variational inequality problem

$$\begin{aligned} \left\{ \begin{array}{l} q^{*}\in \text {Fix}(T)\\ \langle f(q^{*})-q^{*},z-q^{*}\rangle \le 0,\forall z\in \text {Fix}(T). \end{array} \right. \end{aligned}$$
(VIP)

Later in 2004, Xu [25] improved Moudafi’s convergence result by replacing the condition (C\(_{4}\)) by the weaker one:

(C\(_{5}\)):

either \(\frac{\left| \theta _{n+1}-\theta _{n}\right| }{\theta _{n}}\rightarrow 0\) as \(n\rightarrow \infty \) or \(\sum _{n=1}^{\infty }\left| \theta _{n+1}-\theta _{n}\right| <\infty .\)
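To illustrate the process (DDS) numerically, here is a minimal Python sketch of our own, with the constant contraction \(f\equiv u\), the coordinate-swap mapping T used again in Example 4.7 below, and \(\theta _n=\frac{1}{n+1}\), which satisfies (C\(_{1}\)), (C\(_{2}\)) and (C\(_{5}\)):

```python
import numpy as np

def viscosity(T, f, x1, theta, n_iter=5000):
    """Moudafi's viscosity iteration: x_{n+1} = theta_n * f(x_n) + (1 - theta_n) * T(x_n)."""
    x = np.asarray(x1, dtype=float)
    for n in range(1, n_iter + 1):
        t = theta(n)
        x = t * f(x) + (1.0 - t) * T(x)
    return x

# T swaps coordinates, so Fix(T) is the diagonal of R^2; f is the constant map u.
# The limit should solve (VIP), i.e. be the projection of u onto the diagonal.
u = np.array([1.0, 3.0])
T = lambda x: x[::-1].copy()
f = lambda x: u

x = viscosity(T, f, x1=[0.0, 0.0], theta=lambda n: 1.0 / (n + 1))
print(x)  # close to (2, 2), the point of Fix(T) closest to u
```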

In our present work, inspired by the seminal paper [24] and the series of works of Attouch et al. (see for instance [1,2,3,4]), we consider the continuous dynamical system

$$\begin{aligned} \left\{ \begin{array}{l} x^{\prime }(t)+x(t)=\theta (t)f(x(t))+(1-\theta (t))T(x(t))\\ x(0)=x_{0}, \end{array} \right. \end{aligned}$$
(CDS)

associated with the discrete algorithm (DDS), where \(\theta :[0,\infty )\rightarrow (0,1]\) is a regular function. We prove that if the function \(\theta (.)\) satisfies the continuous versions of the conditions (C\(_{1}\)), (C\(_{2}\)) and (C\(_{5}\)), then, for any initial data \(x_{0}\in C\), the system (CDS) has a unique global solution \(x\in C^{1}([0,\infty ),{\mathcal {H}})\), which converges strongly in \({\mathcal {H}}\) as \(t\rightarrow +\infty \) to the unique solution \(q^{*}\) of the problem (VIP). Moreover, in the particular case \(\theta (t)=\frac{K}{(1+t)^{\nu }}\) with \(K>0\) and \(\nu \in (0,1],\) we establish an estimate on the rate of convergence of \(\left\| T(x(t))-x(t)\right\| \) to 0 as \(t\rightarrow +\infty .\) Such a result can be considered as the continuous analogue of a recent result established by Lieder [17] on the rate of convergence of the sequence \((x_{n} -T(x_{n}))_{n}\) for the process (DDS) in the case where the function f is constant and the sequence \((\theta _{n})\) is given by \(\theta _{n}=\frac{1}{n+2}.\) We notice, at the end of this introduction, that Bot et al. [8] have recently introduced the following dynamical system

$$\begin{aligned} \left\{ \begin{array}{l} x^{\prime }(t)=\lambda (t)[T(x(t))-x(t)]-\varepsilon (t)x(t)\\ x(0)=x_{0}, \end{array} \right. \end{aligned}$$
(1.3)

where \(\lambda (.)\) and \(\varepsilon (.)\) are two nonnegative real parameter functions defined on \( [0,\infty )\). The authors proved, under suitable assumptions on the parameter functions, the strong convergence of the solutions x(t) as \(t\rightarrow \infty \) to the fixed point of T with minimal norm. As we will see later in Theorem 3.3, this strong convergence result also holds for the trajectories of the system (CDS) in the particular case where the function f is identically equal to 0, provided the parameter function \(\theta (.)\) satisfies the continuous versions of the conditions (C\(_{1}\)), (C\(_{2}\)) and (C\(_{5}\)).

The rest of the paper is organized as follows. In the next section, we recall some classical notions and results from functional and convex analysis that are used in the sequel of the paper. In the third section, we study the strong convergence as \(t\rightarrow \infty \) of the solutions x(t) of the system (CDS) under the continuous versions of the discrete conditions (C\(_{1}\)), (C\(_{2}\)) and (C\(_{5}\)) on the parameter function \(\theta (.)\). Then we investigate the continuity of the trajectories x(.) with respect to the initial data and their stability under a relatively small error term e(t). In the final section, we first establish a general result that yields a precise estimate of the rate of convergence of the residual term \(x(t)-T(x(t))\) to 0 in the particular case \(\theta (t)=\frac{K}{(1+t)^{\nu }}\) with \(K>0\) and \(0<\nu \le 1\). Then we deduce an estimate on the speed of convergence of x(t) at infinity to \(q^*\) in the particular case where T is a contraction mapping. Finally, we illustrate the sharpness, and in some cases the optimality, of the obtained results through the study of some examples of the system (CDS) and its perturbed version.

2 Preliminaries

In this section, we recall some classical definitions and results from convex and functional analysis and prove some simple lemmas that will be needed for the proof of the main results of this paper.

We first recall the definition and the main properties of the metric projection.

Lemma 2.1

([21, Proposition 1.37]) Let K be a nonempty, closed and convex subset of \({\mathcal {H}}\). Then the following assertions hold:

  (1)

    For every \( x\in {\mathcal {H}},\) there exists a unique \(P_{K}(x)\in K\) such that

    $$\begin{aligned} \left\| x-P_{K}(x)\right\| \le \left\| x-y\right\| \ ~\forall y\in K. \end{aligned}$$

    The operator \(P_{K}:{\mathcal {H}}\rightarrow K\) is called the metric projection onto K.

  (2)

    For every \(x\in {\mathcal {H}}\), \(P_{K}(x)\) is the unique element of K satisfying

    $$\begin{aligned} \langle P_{K}(x)-x,P_{K}(x)-y\rangle \le 0, \ ~\forall y\in K. \end{aligned}$$
  (3)

    The operator \(P_{K}:{\mathcal {H}}\rightarrow K\) is nonexpansive i.e.,

    $$\begin{aligned} \left\| P_{K}(x)-P_{K}(y)\right\| \le \left\| x-y\right\| ,\ ~\forall x,y\in {\mathcal {H}}. \end{aligned}$$

The second result is a classical property of the set of fixed points of a nonexpansive mapping.

Lemma 2.2

([5, Proposition 4.13]) Let K be a closed convex and nonempty subset of \({\mathcal {H}}\). If \(A:K\rightarrow K\) is a nonexpansive mapping then \(\text {Fix}(A)=\{x\in K:A(x)=x\}\) is a closed and convex subset of \({\mathcal {H}}\).

The next result is a particular case of the general demi-closedness property of nonexpansive mappings.

Lemma 2.3

([5, Corollary 4.18]) Let K be a closed convex and nonempty subset of \({\mathcal {H}}\), \(A:K\rightarrow K\) a nonexpansive mapping, and \((x_n)\) a sequence in K. If \((x_n)\) converges weakly in \({\mathcal {H}}\) to some element \({\bar{x}}\) and \((x_{n}-A(x_{n}))\) converges strongly to 0 in \({\mathcal {H}}\), then \({\bar{x}}\in \text {Fix}(A)\).

We now prove a variant of Gronwall’s inequality that will be used frequently in the sequel.

Lemma 2.4

Let \(u,v,w:[0,\infty )\rightarrow [0,\infty )\) be three continuous functions. If the function u is absolutely continuous and satisfies, for almost every \(t\ge 0\), the differential inequality

$$\begin{aligned} u^{\prime }(t)+2v(t)u(t)\le 2w(t)\sqrt{u(t)}. \end{aligned}$$

then, for every \(t\ge 0,\)

$$\begin{aligned} \sqrt{u(t)}\le e^{-V(t)}\sqrt{u(0)}+e^{-V(t)}\int _{0}^{t}e^{V(s)}w(s)ds, \end{aligned}$$
(2.1)

where \(V(t)=\int _{0}^{t}v(\tau )d\tau .\)

Proof

Let \(\varepsilon >0.\) It is clear that the function \(u_{\varepsilon }\) defined on \([0,\infty )\) by \(u_{\varepsilon }(t)=\sqrt{u(t)+\varepsilon }\) is absolutely continuous and satisfies the estimates

$$\begin{aligned} u_{\varepsilon }^{\prime }(t)&=\frac{u^{\prime }(t)}{2\sqrt{u(t)+\varepsilon }}\\&\le -v(t)\frac{u(t)}{\sqrt{u(t)+\varepsilon }}+w(t)\frac{\sqrt{u(t)}}{\sqrt{u(t)+\varepsilon }}\\&\le -v(t)u_{\varepsilon }(t)+v(t)\frac{\varepsilon }{\sqrt{u(t)+\varepsilon } }+w(t)\\&\le -v(t)u_{\varepsilon }(t)+w(t)+v(t)\sqrt{\varepsilon }. \end{aligned}$$

Therefore, for almost every \(t\ge 0,\)

$$\begin{aligned} \left( e^{V(t)}u_{\varepsilon }(t)\right) ^{\prime }\le e^{V(t)} w(t)+\sqrt{\varepsilon }\left( e^{V(t)}\right) ^{\prime }. \end{aligned}$$

Integrating the latter differential inequality, we obtain

$$\begin{aligned} \sqrt{u(t)+\varepsilon }\le e^{-V(t)}\sqrt{u(0)+\varepsilon }+e^{-V(t)}\int _{0}^{t}e^{V(s)}w(s)ds+\sqrt{\varepsilon }(1-e^{-V(t)}),~\forall t\ge 0. \end{aligned}$$

Hence, by letting \(\varepsilon \rightarrow 0,\) we get the required inequality (2.1). \(\square \)
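As a simple sanity check of the lemma (ours, not part of the argument), one can integrate the equality case of the differential inequality with the illustrative choices \(v\equiv 1\) and \(w(t)=\frac{1}{1+t}\), and compare \(\sqrt{u(t)}\) with the right-hand side of (2.1):

```python
import numpy as np

# Integrate u'(t) = -2 v(t) u(t) + 2 w(t) sqrt(u(t)) by explicit Euler and
# compare sqrt(u(t)) with the right-hand side of the bound (2.1).
h, n = 1e-3, 10_000
t = np.arange(n + 1) * h
v = np.ones_like(t)              # v(t) = 1
w = 1.0 / (1.0 + t)              # w(t) = 1/(1+t)
u = np.empty_like(t)
u[0] = 4.0
for k in range(n):
    u[k + 1] = u[k] + h * (-2.0 * v[k] * u[k] + 2.0 * w[k] * np.sqrt(u[k]))

V = np.cumsum(v) * h             # V(t) = int_0^t v(s) ds
bound = np.exp(-V) * np.sqrt(u[0]) + np.exp(-V) * np.cumsum(np.exp(V) * w) * h
print(np.all(np.sqrt(u) <= bound + 1e-2))  # True, up to discretization error
```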

We now state and prove a simple lemma that will be the key tool in studying the asymptotic behavior of the solution x(t) of (CDS) in the case where \(\theta (t)=\frac{K}{(1+t)^\nu }\) with \(K>0\) and \(\nu \in (0,1)\). Before stating this lemma, let us first fix the following notation, which will also be used frequently in the final section of this paper.

Notation 2.5

Let \(u,v:[0,\infty )\rightarrow (0,\infty )\) be two functions. Then:

  (1)

    \( u(t)=O(v(t))\) means that there exists a constant \( M>0\) such that \(u(t)\le M v(t)\ \forall t\ge 0.\)

  (2)

    \( u(t)=o(v(t))\) as \( t\rightarrow \infty \) means that \(\frac{u(t)}{v(t)} \rightarrow 0\) as \( t\rightarrow \infty .\)

  (3)

    \( u(t)\sim v(t)\) as \( t\rightarrow \infty \) means that \(\frac{u(t)}{v(t)} \rightarrow 1\) as \( t\rightarrow \infty .\)

Lemma 2.6

Let \(\sigma ,u:[0,\infty )\rightarrow (0,\infty )\) be two continuously differentiable functions such that

$$\begin{aligned} \int _{0}^{\infty }e^{\Gamma (t)}u(t)dt=\infty , \end{aligned}$$

where \(\Gamma (t)=\int _{0}^{t}\sigma (s)ds\). If \(( \frac{u}{\sigma })^{\prime }(t)=o (u(t))\) as \(t\rightarrow \infty \) then

$$\begin{aligned} \int _{0}^{t}e^{\Gamma (s)}u(s)ds\sim \frac{e^{\Gamma (t)}u(t)}{\sigma (t)} \text { as }t\rightarrow \infty . \end{aligned}$$

Proof

Let v and w be the two functions defined on \([0,\infty )\) respectively by \(v(t)=\int _{0}^{t}e^{\Gamma (s)}u(s)ds\) and \(w(t)=\frac{e^{\Gamma (t)}u(t) }{\sigma (t)}.\) For every \(t\ge 0,\) we have

$$\begin{aligned} \frac{v^{\prime }(t)}{w^{\prime }(t)}=\frac{e^{\Gamma (t)}u(t)}{e^{\Gamma (t)}\left( u(t)+(\frac{u}{\sigma })^{\prime }(t)\right) }=\frac{1}{1+\frac{( \frac{u}{\sigma })^{\prime }(t)}{u(t)}}, \end{aligned}$$

which clearly gives \(v^{\prime }(t)\sim w^{\prime }(t)\) as \( t\rightarrow \infty .\) This result combined with the positivity of the function v and the fact that \(\lim _{t\rightarrow \infty }v(t)=\infty \ \) implies that \(\lim _{t\rightarrow \infty }w(t)=\infty \). Hence, L’Hospital’s rule yields the claimed result, in fact:

$$\begin{aligned} \lim _{t\rightarrow \infty }\frac{v(t)}{w(t)}=\lim _{t\rightarrow \infty } \frac{v^{\prime }(t)}{w^{\prime }(t)}=1. \end{aligned}$$

\(\square \)
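The following short computation (an illustrative check of ours, with \(\sigma (t)=(1+t)^{-1/2}\) and \(u(t)=(1+t)^{-2}\), so that \((\frac{u}{\sigma })^{\prime }(t)=O(t^{-5/2})=o(u(t))\)) confirms the asymptotic equivalence numerically:

```python
import numpy as np

h, n = 1e-2, 400_000                      # integrate up to t = 4000
t = np.arange(n + 1) * h
sigma = (1.0 + t) ** -0.5
u = (1.0 + t) ** -2.0
Gamma = np.cumsum(sigma) * h              # Gamma(t) = int_0^t sigma(s) ds
lhs = np.cumsum(np.exp(Gamma) * u) * h    # int_0^t e^Gamma u ds
rhs = np.exp(Gamma) * u / sigma           # e^Gamma(t) u(t) / sigma(t)
print(lhs[-1] / rhs[-1])                  # approaches 1 as t grows
```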

We close this section by proving the following simple result that will be needed to prove the existence of the solutions x(.) of the dynamical system (CDS).

Lemma 2.7

Let K be a nonempty closed and convex subset of the Hilbert space \( {\mathcal {H}}\). Let \(a<b\) be two real numbers, \(w:[a,b]\rightarrow [0,\infty )\) a continuous function such that \(\int _{a}^{b}w(t)dt>0\), and \( v:[a,b]\rightarrow {\mathcal {H}}\) a continuous function such that \(v(t)\in K\) for every \(t\in [a,b].\) Then

$$\begin{aligned} J:=\frac{1}{\int _{a}^{b}w(t)dt}\int _{a}^{b}v(s)w(s)ds\in K. \end{aligned}$$

Proof

For every \(N\in {\mathbb {N}},\) set \(J_{N}=\sum _{k=0}^{N-1}v(x_{k,N})\alpha _{k,N}\) with \(x_{k,N}=a+k\frac{b-a}{N}\) and \(\alpha _{k,N}=\frac{1}{ \int _{a}^{b}w(t)dt}\int _{x_{k,N}}^{x_{k+1,N}}w(s)ds.\) From the convexity of the set K,  it follows that \(J_{N}\in K\) for every \(N\in {\mathbb {N}}\). On the other hand, the uniform continuity of the function v on the compact interval [a, b] implies directly that the sequence \((J_{N})_{N}\) converges strongly in \({\mathcal {H}}\) to J. Hence, using the fact that K is a closed subset of \({\mathcal {H}}\), we conclude that \(J\in K.\) \(\square \)

3 Strong convergence and stability of the trajectories of the dynamical system (CDS)

In the first part of this section, we mainly study the asymptotic behavior of the solution of the dynamical system

$$\begin{aligned} \left\{ \begin{array}{l} x^{\prime }(t)+x(t)=\theta (t)f(x(t))+(1-\theta (t))T(x(t))\\ x(0)=x_{0}, \end{array} \right. \end{aligned}$$
(CDS)

where \(\theta :[0,\infty ) \rightarrow [0,1]\) is an absolutely continuous function and \(x_0\in C\) is a given initial data.

Before stating our main result, let us first make precise the notion of a trajectory of the system (CDS).

Definition 3.1

A trajectory of the system (CDS) is a continuously differentiable function \(x:[0,\infty )\rightarrow {\mathcal {H}}\) that satisfies the following properties:

  (1)

    \(x(t)\in C\) for every \(t\ge 0,\)

  (2)

    \(x(0)=x_{0},\)

  (3)

    \(x^{\prime }(t)+x(t)=\theta (t)f(x(t))+(1-\theta (t))T(x(t))\) for every \(t\ge 0.\)

We will also need the following notion in the study of the stability of the trajectories of (CDS) in the second part of this section.

Definition 3.2

Let \(e:[0,\infty )\rightarrow {\mathcal {H}}\) be a continuous function. A trajectory of a perturbed version of the system (CDS) with error term e(.) is a continuously differentiable function \(y:[0,\infty )\rightarrow {\mathcal {H}}\) that satisfies the following properties:

  (1)

    \(y(t)\in C\) for every \(t\ge 0,\)

  (2)

    \(y^{\prime }(t)+y(t)=\theta (t)f(y(t))+(1-\theta (t))T(y(t))+e(t)\) for every \(t\ge 0.\)

We now state and prove the first main result of the paper.

Theorem 3.3

The system (CDS) has a unique trajectory x(.). Moreover, if in addition the function \(\theta (.)\) satisfies the following conditions:

(C’\(_{1}\)):

\(\theta (t)\rightarrow 0\) as \(t\rightarrow \infty ,\)

(C’\(_{2}\)):

\(\int _{0}^{+\infty }\theta (t)dt=\infty ,\)

(C’\(_{5}\)):

either \(\int _{0}^{+\infty }\left| \theta ^{\prime }(t)\right| dt<\infty \) or \(\frac{\theta ^{\prime }(t)}{\theta (t)}\rightarrow 0\) as \(t\rightarrow \infty ,\)

then x(t) converges strongly in \({\mathcal {H}}\) as \(t\rightarrow \infty \) to \(q^{*}\), the unique solution of the variational inequality problem

$$\begin{aligned} \left\{ \begin{array}{l} q^{*}\in \text {Fix}(T)\\ \langle f(q^{*})-q^{*},z-q^{*}\rangle \le 0,\forall z\in \text {Fix}(T). \end{array} \right. \end{aligned}$$
(VIP)

Proof

Let us first recall, for the convenience of the reader, the classical proof of the existence and uniqueness of the solution \(q^{*}\) of the problem (VIP). First, from Lemma 2.2, \(\text {Fix}(T)\) is a closed, convex and nonempty subset of \({\mathcal {H}}\); thus the projection operator \(P_{\text {Fix}(T)}\) is well-defined. Moreover, from the second assertion of Lemma 2.1, the problem (VIP) is equivalent to the fact that \(q^{*}\) is a fixed point of the mapping \(P_{\text {Fix}(T)}\circ f:\text {Fix}(T)\rightarrow \text {Fix}(T).\) Now, since \(P_{\text {Fix}(T)}\) is nonexpansive, the mapping \(P_{\text {Fix}(T)}\circ f\) is a contraction with the same coefficient \(\alpha \) as the function f and therefore, according to the classical Banach fixed point theorem, it has a unique fixed point. This proves the existence and the uniqueness of the solution \(q^{*}\) of (VIP).

We divide the rest of the proof into several steps.

The first step: We prove here the existence and the uniqueness of the trajectory x(.) of the system (CDS). To do this, we consider the Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{l} x^{\prime }(t)+x(t)=g(t,x(t)) \\ x(0)=x_{0}, \end{array} \right. \end{aligned}$$
(3.1)

where \(g:[0,\infty )\times {\mathcal {H}}\rightarrow {\mathcal {H}}\) is the mapping defined by

$$\begin{aligned} g(t,x)=\theta (t)f(P_{C}(x))+(1-\theta (t))T(P_{C}(x)). \end{aligned}$$

It is clear that the mapping g is continuous and satisfies, for every \(t\ge 0\) and \(x_{1},x_{2}\in {\mathcal {H}},\) the estimate

$$\begin{aligned} \left\| g(t,x_{1})-g(t,x_{2})\right\| \le (1-\gamma \theta (t))\left\| x_{1}-x_{2}\right\| , \end{aligned}$$
(3.2)

with \(\gamma =1-\alpha .\) Hence, according to the classical Cauchy–Lipschitz theorem, the system (3.1) has a unique global solution \(x\in C^{1} ([0,\infty ),{\mathcal {H}}).\) Let \(t>0\) be a fixed real number. From (3.1),

$$\begin{aligned} x(t)&=e^{-t}x_{0}+e^{-t}\int _{0}^{t}e^{s}g(s,x(s))ds\\&=e^{-t}x_{0}+(1-e^{-t})\int _{0}^{t}g(s,x(s))w_t(s)ds \end{aligned}$$

where \(w_t(s)=\frac{e^{s-t}}{1-e^{-t}}\). Since \(g(s,x(s))\in C\) for every \(s\in [0,t]\), Lemma 2.7 ensures that \(\int _{0}^{t}g(s,x(s))w_t(s)ds\in C\), which in turn implies, thanks again to the convexity of C,  that \(x(t)\in C.\) This proves that x(.) is a trajectory of the system (CDS) in the sense of Definition 3.1. The uniqueness of the trajectory of (CDS) follows from the uniqueness of the solution of the Cauchy problem (3.1) and the trivial fact that every trajectory of (CDS) is also a solution of (3.1).
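The integral representation above also suggests a simple numerical scheme for (CDS), namely an explicit Euler discretization of (3.1). The following Python sketch is given for concreteness only; the choices of C, T, f and \(\theta \) are illustrative assumptions of ours:

```python
import numpy as np

def solve_cds(T, f, P_C, theta, x0, t_max=200.0, h=1e-2):
    """Explicit Euler scheme for x'(t) = -x(t) + g(t, x(t)) with
    g(t, x) = theta(t) f(P_C(x)) + (1 - theta(t)) T(P_C(x)), as in (3.1)."""
    x = np.asarray(x0, dtype=float)
    for k in range(int(t_max / h)):
        t = k * h
        p = P_C(x)
        g = theta(t) * f(p) + (1.0 - theta(t)) * T(p)
        x = x + h * (g - x)   # each Euler step is a convex combination, so x stays in C
    return x

# Illustrative data (ours): C = closed unit ball of R^2, T = coordinate swap,
# f = constant map with value u in C, theta(t) = (1+t)^{-1/2}.
u = np.array([0.3, 0.9])
P_C = lambda x: x / max(1.0, np.linalg.norm(x))
T = lambda x: x[::-1].copy()
f = lambda x: u

x = solve_cds(T, f, P_C, theta=lambda t: (1.0 + t) ** -0.5, x0=[1.0, 0.0])
print(x)  # approximates q* = (0.6, 0.6), the solution of (VIP) for this data
```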

The second step: We prove here that the trajectory x(.) is bounded, i.e., \(x\in L^{\infty }([0,\infty ),{\mathcal {H}}).\) To this end, we consider the function u defined on \([0,\infty )\) by \(u(t)=\left\| x(t)-q^{*}\right\| ^{2}.\) Using the fact that x(.) is a solution of (3.1) and the estimate (3.2), we easily obtain the following estimates:

$$\begin{aligned} u^{\prime }(t)&=2\langle x^{\prime }(t),x(t)-q^{*}\rangle \nonumber \\&=2\langle -x(t)+g(t,x(t)),x(t)-q^{*}\rangle \nonumber \\&=-2u(t)+2\langle g(t,q^{*})-q^{*},x(t)-q^{*}\rangle +2\langle g(t,x(t))-g(t,q^{*}),x(t)-q^{*}\rangle \nonumber \\&=-2u(t)+2\theta (t)\langle f(q^{*})-q^{*},x(t)-q^{*}\rangle +2\langle g(t,x(t))-g(t,q^{*}),x(t)-q^{*}\rangle \nonumber \\&\le -2u(t)+2\theta (t)\langle f(q^{*})-q^{*},x(t)-q^{*} \rangle +2(1-\gamma \theta (t))\left\| x(t)-q^{*}\right\| ^{2}\\&\le -2\gamma \theta (t)u(t)+2\theta (t)\left\| f(q^{*})-q^{*}\right\| \left\| x(t)-q^{*}\right\| \nonumber \\&=-2\gamma \theta (t)u(t)+2\theta (t)\left\| f(q^{*})-q^{*}\right\| \sqrt{u(t)}. \nonumber \end{aligned}$$
(3.3)

Hence, by applying Lemma 2.4, we get

$$\begin{aligned} \sqrt{u(t)}\le e^{-\gamma \Theta (t)}\sqrt{u(0)}+\frac{\left\| f(q^{*})-q^{*}\right\| }{\gamma }(1-e^{-\gamma \Theta (t)}), \end{aligned}$$

where

$$\begin{aligned} \Theta (t)=\int _{0}^{t}\theta (s)ds. \end{aligned}$$
(3.4)

We thus conclude that

$$\begin{aligned} \sup _{t\ge 0}\left\| x(t)-q^{*}\right\| \le \max \left( \left\| x_{0}-q^{*}\right\| ,\frac{\left\| f(q^{*})-q^{*}\right\| }{\gamma }\right) , \end{aligned}$$

which clearly implies that x(.) is bounded.

The third step: We prove here that \(x^{\prime }(t)\) converges strongly in \({\mathcal {H}}\) to 0 as \(t\rightarrow \infty .\) The main idea of the proof is inspired by the proof of [11, Lemma 8]. Let \(\delta >0.\) We define the function \(\omega \) on \([0,\infty )\) by \(\omega (t)=\left\| x(t+\delta )-x(t)\right\| ^{2}.\) Clearly,

$$\begin{aligned} \omega ^{\prime }(t)&=2\langle x^{\prime }(t+\delta )-x^{\prime }(t),x(t+\delta )-x(t)\rangle \nonumber \\&=-2\omega (t)+2\langle g(t+\delta ,x(t+\delta ))-g(t,x(t)),x(t+\delta )-x(t)\rangle \nonumber \\&=-2\omega (t)+2\langle g(t,x(t+\delta ))-g(t,x(t)),x(t+\delta )-x(t)\rangle \nonumber \\&\ \quad +2\langle g(t+\delta ,x(t+\delta ))-g(t,x(t+\delta )),x(t+\delta )-x(t)\rangle \nonumber \\&\le -2\gamma \theta (t)\omega (t)+2\left\| g(t+\delta ,x(t+\delta ))-g(t,x(t+\delta ))\right\| \sqrt{\omega (t)}\nonumber \\&\le -2\gamma \theta (t)\omega (t)+2M\left| \theta (t+\delta )-\theta (t)\right| \sqrt{\omega (t)}, \end{aligned}$$
(3.5)

where

$$\begin{aligned} M:=\sup _{s\ge 0}\left( \left\| f(x(s))\right\| +\left\| T(x(s))\right\| \right) . \end{aligned}$$
(3.6)

We notice here that M is finite since \(x(.)\in L^{\infty }([0,\infty ),{\mathcal {H}})\) and f and T are Lipschitz continuous functions. Applying now Lemma 2.4 to the inequality (3.5), we deduce that for every \(t\ge 0,\)

$$\begin{aligned} \left\| x(t+\delta )-x(t)\right\| \le e^{-\gamma \Theta (t)}\left\| x(\delta )-x(0)\right\| +Me^{-\gamma \Theta (t)}\int _{0}^{t}e^{\gamma \Theta (s)}\left| \theta (s+\delta )-\theta (s)\right| ds \end{aligned}$$

where \(\Theta \) is the function defined by (3.4).

Dividing the last inequality by \(\delta \) and letting \(\delta \rightarrow 0,\) we obtain

$$\begin{aligned} \left\| x^{\prime }(t)\right\| \le e^{-\gamma \Theta (t)}\left\| x^{\prime }(0)\right\| +Me^{-\gamma \Theta (t)}\int _{0}^{t}e^{\gamma \Theta (s)}\left| \theta ^{\prime }(s)\right| ds,t\ge 0. \end{aligned}$$
(3.7)

From the condition (C’\(_{2}\)), \(\Theta (t)\rightarrow \infty \) as \(t\rightarrow \infty ,\) which implies that the first term on the right-hand side of the inequality (3.7) converges to 0 at infinity. Therefore, to prove that \(\left\| x^{\prime }(t)\right\| \rightarrow 0\) as \(t\rightarrow \infty \), it suffices to verify that

$$\begin{aligned} r(t):=e^{-\gamma \Theta (t)}\int _{0}^{t}e^{\gamma \Theta (s)}\left| \theta ^{\prime }(s)\right| ds\rightarrow 0 \text { as } t\rightarrow \infty . \end{aligned}$$
(3.8)

At this point, we will make use of the condition (C’\(_{5}\)). We therefore consider the following two cases:

The case where \(\int _{0}^{\infty }\left| \theta ^{\prime }(s)\right| ds<\infty \).

Let \(A>0.\) Since the function \(\Theta \) is increasing, we have, for every \(t\ge A,\)

$$\begin{aligned} r(t)\le e^{-\gamma \Theta (t)}\int _{0}^{A}e^{\gamma \Theta (s)}\left| \theta ^{\prime }(s)\right| ds+\int _{A}^{t}\left| \theta ^{\prime }(s)\right| ds. \end{aligned}$$

This inequality clearly implies

$$\begin{aligned} {\overline{\lim }}_{t\rightarrow \infty }~r(t)\le \int _{A}^{\infty }\left| \theta ^{\prime }(s)\right| ds. \end{aligned}$$

Therefore, by letting \(A\rightarrow \infty ,\) we get the desired result (3.8) thanks to the positivity of the function r(.).

The case where \(\lim _{t\rightarrow \infty }\frac{\left| \theta ^{\prime }(t)\right| }{\theta (t)}=0\).

Let \(A>0.\) For every \(t\ge A,\)

$$\begin{aligned} r(t)&\le e^{-\gamma \Theta (t)}\int _{0}^{A}e^{\gamma \Theta (s)}\left| \theta ^{\prime }(s)\right| ds+e^{-\gamma \Theta (t)}\int _{A}^{t} e^{\gamma \Theta (s)}\theta (s)ds~\sup _{s\ge A}\frac{\left| \theta ^{\prime }(s)\right| }{\theta (s)}\nonumber \\&\le e^{-\gamma \Theta (t)}\int _{0}^{A}e^{\gamma \Theta (s)}\left| \theta ^{\prime }(s)\right| ds+\frac{1}{\gamma }(1-e^{-\gamma (\Theta (t)-\Theta (A))})\sup _{s\ge A}\frac{\left| \theta ^{\prime }(s)\right| }{\theta (s)}\nonumber \\&\le e^{-\gamma \Theta (t)}\int _{0}^{A}e^{\gamma \Theta (s)}\left| \theta ^{\prime }(s)\right| ds+\frac{1}{\gamma }\sup _{s\ge A}\frac{\left| \theta ^{\prime }(s)\right| }{\theta (s)}. \end{aligned}$$
(3.9)

Hence, by letting, respectively, t and A go to \(\infty \), we get \({\overline{\lim }}_{t\rightarrow \infty }~r(t)\le 0,\) which implies again the required result (3.8).

The fourth step: We prove here that

$$\begin{aligned} r^{*}:={\overline{\lim }}_{t\rightarrow \infty }\langle f(q^{*})-q^{*},x(t)-q^{*}\rangle \le 0. \end{aligned}$$

Since \(x(.)\in L^{\infty }([0,\infty ),{\mathcal {H}}),\) there exist \(x_{\infty }\in {\mathcal {H}}\) and a sequence of positive real numbers \((t_{n})\) which tends to \(\infty \) such that \((x(t_{n}))_{n}\) converges weakly in \({\mathcal {H}}\) to \(x_{\infty }\) and \(r^{*}=\langle f(q^{*})-q^{*},x_{\infty }-q^{*}\rangle .\) On the other hand, since x(.) is a trajectory of (CDS),

$$\begin{aligned} \left\| x(t)-T(x(t))\right\|&\le \theta (t)\left( \left\| f(x(t))\right\| +\left\| T(x(t))\right\| \right) +\left\| x^{\prime }(t)\right\| \nonumber \\&\le 2M~\theta (t)+\left\| x^{\prime }(t)\right\| , \end{aligned}$$
(3.10)

for every \(t\ge 0\), where the constant M is given by (3.6). Hence the condition (C’\(_{1})\) combined with the fact that \(\left\| x^{\prime }(t)\right\| \rightarrow 0\) as \(t\rightarrow \infty \) implies that the sequence \((x(t_{n} )-T(x(t_{n})))_{n}\) converges strongly in \({\mathcal {H}}\) to 0. Therefore, by invoking Lemma 2.3, we deduce that \(x_{\infty }\in \text {Fix}(T)\) which in turn implies that

$$\begin{aligned} r^{*}=\langle f(q^{*})-q^{*},x_{\infty }-q^{*}\rangle \le 0. \end{aligned}$$

The fifth step: We prove here that \(x(t)\rightarrow q^{*}\) strongly in \({\mathcal {H}}\) as \(t\rightarrow \infty .\)

Define \(u(t)=\left\| x(t)-q^{*}\right\| ^{2}.\) From (3.3), we have for every \(t\ge 0,\)

$$\begin{aligned} u^{\prime }(t)+2\gamma \theta (t)u(t)\le 2\theta (t)w(t), \end{aligned}$$

where

$$\begin{aligned} w(t):=\max \{\langle f(q^{*})-q^{*},x(t)-q^{*}\rangle ,0\}. \end{aligned}$$

By integrating the last differential inequality, we get

$$\begin{aligned} u(t)\le e^{-2\gamma \Theta (t)}u(0)+2e^{-2\gamma \Theta (t)}\int _{0}^{t} e^{2\gamma \Theta (s)}\theta (s)w(s)ds \end{aligned}$$

for every \(t\ge 0\). Let us notice here that, from the previous step, we clearly have \(w(t)\rightarrow 0\) as \(t\rightarrow \infty .\) Hence, by following the same procedure leading to the estimate (3.9), we easily get \(e^{-2\gamma \Theta (t)}\int _{0}^{t} e^{2\gamma \Theta (s)}\theta (s)w(s)ds\) \(\rightarrow 0\) as \(t\rightarrow \infty .\) We therefore conclude that \(u(t)\rightarrow 0\) as \(t\rightarrow \infty .\) This completes the proof. \(\square \)

In the second part of this section, we prove that the dynamical system (CDS) is stable under any perturbation of the initial data and any relatively small computational error. Precisely, we prove the following result.

Theorem 3.4

Let x(.) be a trajectory of the dynamical system (CDS) and y(.) a trajectory of a perturbed version of (CDS) with error term e(.) in the sense of Definition 3.2. Then, for every \(t\ge 0\),

$$\begin{aligned} \left\| y(t)-x(t)\right\| \le e^{-\gamma \Theta (t)}\left\| y(0)-x(0)\right\| +e^{-\gamma \Theta (t)}\int _{0}^{t} e^{\gamma \Theta (s)} \left\| e(s)\right\| ds. \end{aligned}$$
(3.11)

Moreover, if the error function e(.) satisfies either \(\int _0^{\infty } \left\| e(t)\right\| dt<\infty \) or \(\lim _{t\rightarrow \infty }\frac{\left\| e(t)\right\| }{\theta (t)}=0\), then y(t) converges strongly in \({\mathcal {H}}\) as \(t\rightarrow \infty \) to \(q^{*}\), the unique solution of the variational inequality problem (VIP).

Proof

Let v be the function defined on \([0,\infty )\) by \(v(t)=\left\| x(t)-y(t)\right\| ^{2}.\) Clearly, for every \(t\ge 0,\) we have

$$\begin{aligned} v^{\prime }(t)&=2\langle x^{\prime }(t)-y^{\prime }(t),x(t)-y(t)\rangle \\&=-2v(t)+2\langle \theta (t)(f(x(t))-f(y(t)))+(1-\theta (t))(T(x(t))-T(y(t)))- e(t),x(t)-y(t)\rangle . \end{aligned}$$

Hence, using the Cauchy–Schwarz inequality, we easily get

$$\begin{aligned} v^{\prime }(t) +2\gamma \theta (t)v(t)\le 2 \left\| e(t)\right\| \sqrt{v(t)}. \end{aligned}$$

Therefore, by applying Lemma 2.4, we deduce that for every \(t\ge 0\)

$$\begin{aligned} \sqrt{v(t)}\le e^{-\gamma \Theta (t)}\sqrt{v(0)}+e^{-\gamma \Theta (t)}\int _{0}^{t} e^{\gamma \Theta (s)} \left\| e(s)\right\| ds. \end{aligned}$$

This ends the proof of (3.11). On the other hand, by repeating exactly the same arguments leading to (3.8) in the proof of Theorem 3.3 and using the assumption on the error term e(.), we get

$$\begin{aligned} e^{-\gamma \Theta (t)}\int _{0}^{t} e^{\gamma \Theta (s)}\left\| e(s)\right\| ds\rightarrow 0 \text { as } t\rightarrow \infty . \end{aligned}$$

We, therefore, deduce that \(\left\| y(t)-x(t)\right\| \rightarrow 0\) as \(t\rightarrow \infty .\) Recalling finally that, from Theorem 3.3, \(x(t)\rightarrow q^{*}\) strongly in \({\mathcal {H}}\) as \(t\rightarrow \infty \), we thus conclude that y(t) converges strongly in \({\mathcal {H}}\) as well to the same limit \(q^{*}\) as \(t\rightarrow \infty .\) \(\square \)
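The bound (3.11) can be observed numerically. In the following one-dimensional toy instance of ours, T is the identity, \(f\equiv u\) (so that \(\gamma =1\)), \(\theta (t)=(1+t)^{-1/2}\) and \(e(t)=(1+t)^{-2}\), and the exact and perturbed systems reduce to \(x^{\prime }=-\theta (x-u)\) and \(y^{\prime }=-\theta (y-u)+e\):

```python
import numpy as np

h, n = 1e-3, 200_000               # integrate up to t = 200
t = np.arange(n + 1) * h
theta = (1.0 + t) ** -0.5
err = (1.0 + t) ** -2.0
u = 1.0

x = np.empty_like(t); x[0] = 5.0   # exact trajectory
y = np.empty_like(t); y[0] = -3.0  # perturbed trajectory
for k in range(n):
    x[k + 1] = x[k] + h * (-theta[k] * (x[k] - u))
    y[k + 1] = y[k] + h * (-theta[k] * (y[k] - u) + err[k])

Theta = np.cumsum(theta) * h       # Theta(t) = int_0^t theta(s) ds
bound = np.exp(-Theta) * abs(y[0] - x[0]) \
      + np.exp(-Theta) * np.cumsum(np.exp(Theta) * err) * h
print(np.all(np.abs(y - x) <= bound + 1e-2))  # True, up to discretization error
```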

4 On the convergence rate of trajectories of a perturbed version of the dynamical system (CDS)

In this section, we first establish a precise estimate of the rate of convergence of the residual term \(y(t)-T(y(t))\) to 0 at infinity, where y(.) is a trajectory of a perturbed version of the dynamical system (CDS) (in the sense of Definition 3.2), in the particular case where \(\theta (t)=\frac{K}{(1+t)^{\nu }}\) with \(0<\nu \le 1\) and \(K>0\), and the size \(\left\| e(t)\right\| \) of the error term is \(O(\frac{1}{t^\mu })\) at infinity with \(\mu > \min \{1,\nu \}\). Then we deduce an estimate on the rate of convergence of y(t) to \(q^{*}\) in the particular case where T is a contraction mapping. In the second part of the section, we test the optimality of the obtained results through the study of some particular examples of the perturbed and the exact versions of (CDS).

Theorem 4.1

We assume here that \(\theta (t)=\frac{K}{(1+t)^{\nu }}\) with \(0<\nu <1\) and \(K>0\). Let y be a trajectory of a perturbed version of (CDS) with error term e(.). If the function e(.) satisfies \( \left\| e(t)\right\| =O(\frac{1}{t^\mu })\) as \(t\rightarrow \infty \) with \(\mu > \nu \), then y(t) converges strongly in \({\mathcal {H}}\) as \(t\rightarrow \infty \) to \(q^{*}\) and satisfies

$$\begin{aligned} \left\| y(t)-T(y(t))\right\| =O\left( \frac{1}{t^{\min \{\nu ,\mu -\nu \}}}\right) \text { as } t\rightarrow \infty . \end{aligned}$$
(4.1)

Moreover, if we assume in addition that the mapping T is a contraction then

$$\begin{aligned} \left\| y(t)-q^*\right\| =O\left( \frac{1}{t^{\min \{\nu ,\mu -\nu \}}}\right) \text { as } t\rightarrow \infty . \end{aligned}$$
(4.2)

Proof

First, the strong convergence of y(t) in \({\mathcal {H}}\) as \(t\rightarrow \infty \) to \(q^{*}\) follows immediately from Theorem 3.4. Let us now prove the estimate (4.1). Let x(.) be the trajectory of (CDS) associated with the initial data \(x_0=y(0).\) Then, from Theorem 3.4, we have

$$\begin{aligned} \left\| y(t)-x(t)\right\| \le e^{-\gamma \Theta (t)}\int _{0}^{t} e^{\gamma \Theta (s)} \left\| e(s)\right\| ds \end{aligned}$$
(4.3)

for every \(t\ge 0\). Let us recall here that in the proof of Theorem 3.3 (see (3.7) and (3.10)), we have established that there exists a constant \(M>0\) such that for every \(t\ge 0,\)

$$\begin{aligned} \left\| x(t)-T(x(t))\right\| \le M~\theta (t)+\left\| x^{\prime }(t)\right\| \end{aligned}$$
(4.4)

and

$$\begin{aligned} \left\| x^{\prime }(t)\right\| \le e^{-\gamma \Theta (t)}\left\| x^{\prime }(0)\right\| +Me^{-\gamma \Theta (t)}\int _{0}^{t}e^{\gamma \Theta (s)}\left| \theta ^{\prime }(s)\right| ds. \end{aligned}$$
(4.5)

Hence, by employing the triangle inequality and combining the estimates (4.3), (4.4), and (4.5), we deduce that for every \(t\ge 0\)

$$\begin{aligned} \left\| y(t)-T(y(t))\right\|&\le \left\| y(t)-x(t)\right\| +\left\| x(t)-T(x(t))\right\| +\left\| T(x(t))-T(y(t))\right\| \\&\le 2\left\| y(t)-x(t)\right\| +\left\| x(t)-T(x(t))\right\| \\&\le M^{\prime }\left( \theta (t)+e^{-\gamma \Theta (t)}+e^{-\gamma \Theta (t)}\int _{0}^{t}e^{\gamma \Theta (s)}\frac{1}{(1+s)^{\nu +1}}ds\right. \\&\qquad \left. + e^{-\gamma \Theta (t)}\int _{0}^{t}e^{\gamma \Theta (s)}\frac{1}{(1+s)^\mu }ds\right) , \end{aligned}$$

where \(M^{\prime }\) is a nonnegative real constant independent of \(t\ge 0\). Hence, a direct application of Lemma 2.6 to the last two terms of the last inequality yields the required estimate (4.1). Let us finally prove (4.2) in the case where the mapping T is a contraction. Let \(\beta \in [0,1)\) be the contraction coefficient of T. For every \(t\ge 0\), we have

$$\begin{aligned} \left\| y(t)-q^{*}\right\|&\le \left\| y(t)-T(y(t))\right\| +\left\| T(y(t))-T(q^{*})\right\| ,\nonumber \\&\le \left\| y(t)-T(y(t))\right\| +\beta \left\| y(t)-q^{*}\right\| , \end{aligned}$$
(4.6)

which implies that

$$\begin{aligned} \left\| y(t)-q^{*}\right\| \le \frac{1}{1-\beta } \left\| y(t)-T(y(t))\right\| . \end{aligned}$$

Hence, the estimate (4.2) follows immediately by invoking the estimate (4.1). \(\square \)
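The rate (4.1) can also be checked empirically. In the following toy instance of ours, \(T(x)=\frac{x}{2}\), \(f\equiv u=1\), \(\nu =\frac{1}{2}\) and \(\mu =2\), so that \(\min \{\nu ,\mu -\nu \}=\frac{1}{2}\) and the rescaled residual \(t^{1/2}\left| y(t)-T(y(t))\right| \) should remain bounded:

```python
nu, mu, u = 0.5, 2.0, 1.0
h, n = 1e-3, 1_000_000              # integrate up to t = 1000
y, samples = 3.0, []
for k in range(n):
    t = k * h
    th = (1.0 + t) ** -nu           # theta(t)
    e = (1.0 + t) ** -mu            # error term e(t)
    # y' = -y + theta*f(y) + (1 - theta)*T(y) + e, with f = u and T(y) = y/2
    y += h * (-y + th * u + (1.0 - th) * 0.5 * y + e)
    if k % 200_000 == 0 and k > 0:
        samples.append((t, abs(y - 0.5 * y) * (1.0 + t) ** min(nu, mu - nu)))
print(samples)  # rescaled residuals, roughly constant
```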

Remark 4.2

The results of the previous theorem can be extended to cover the limit case \( \nu =1.\) To see this, let us assume that \(\theta (t)=\frac{K}{1+t}\) with \(K>0\) and \(\left\| e(t)\right\| =O(\frac{1}{t^{\mu }})\) at infinity with \( \mu >1.\) It is clear from Theorem 3.4 that the strong convergence of y(t) to \( q^{*}\) as \(t\rightarrow \infty \) still holds. On the other hand, in this case, since \(e^{\gamma \Theta (t)}=(1+t)^{K\gamma }\), the estimate on \(\left\| y(t)-T(y(t))\right\| \) established in the proof of Theorem 4.1 becomes

$$\begin{aligned} \left\| y(t)-T(y(t))\right\| \le M_1\left( \frac{1}{1+t}+\frac{1}{ (1+t)^{K\gamma }}+\frac{1}{(1+t)^{K\gamma }}\int _{0}^{t}\left( \frac{1}{ (1+s)^{2-K\gamma }}+\frac{1}{(1+s)^{\mu -K\gamma }}\right) ds\right) , \end{aligned}$$

where \(M_1\) is a constant independent of \(t\ge 0.\) Hence, a combination of this last inequality with the classical result

$$\begin{aligned} \int _{0}^{t}\frac{ds}{(1+s)^{\delta }}\sim \left\{ \begin{array}{lll} \frac{t^{1-\delta }}{(1-\delta )} &{} \text {if} &{} \delta <1, \\ \ln t &{} \text {if} &{} \delta =1 \\ \frac{1}{\delta -1} &{} \text {if} &{} \delta >1 \end{array} \right. \text { as }t\rightarrow \infty \end{aligned}$$

gives, after a careful discussion of the parameters \(\mu \), \(\gamma \) and K, a precise estimate on the norm of the residual term \(y(t)-T(y(t))\) as \(t\rightarrow \infty \). In particular, if \(K\gamma >1\) and \(\mu \ge 2\), we obtain

$$\begin{aligned} \left\| y(t)-T(y(t))\right\| =O\left( \frac{1}{t}\right) \text { as } t\rightarrow \infty . \end{aligned}$$
(4.7)
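A numerical check of (4.7) (again a toy instance of ours, with \(f\equiv u\) so that \(\gamma =1\), \(T(x)=\frac{x}{2}\), \(K=3\), no error term, and \(\theta \) capped at 1 so that it takes values in (0, 1]):

```python
K, u, h, y = 3.0, 1.0, 1e-3, 2.0
for k in range(2_000_000):                 # integrate up to t = 2000
    t = k * h
    th = min(1.0, K / (1.0 + t))           # theta(t) = K/(1+t), capped at 1
    # y' = -y + theta*u + (1 - theta)*y/2
    y += h * (-y + th * u + (1.0 - th) * 0.5 * y)
print(2000.0 * abs(y - 0.5 * y))           # t * |y - T(y)| stays bounded (roughly K*u = 3)
```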

Remark 4.3

From the proof of the estimates (4.1) and (4.7), one can easily notice that, in the case where y(.) is a trajectory of the system (CDS) (i.e., the error term e(t) is identically equal to 0), the estimate (4.7) still holds true and (4.1) becomes

$$\begin{aligned} \left\| y(t)-T(y(t))\right\| =O\left( \frac{1}{t^{\nu }}\right) \text { as } t\rightarrow \infty . \end{aligned}$$

Remark 4.4

We will show later that the first part of Theorem 4.1 concerning the strong convergence of y(t) to \(q^{*}\) at infinity does not remain true for \(\theta (t)=\frac{K}{(1+t)^{\nu }}\) when \(\nu >1\) and \( K>0.\) However, we do not know whether the estimate (4.1) on the speed of convergence of the residual term \( y(t)-T(y(t))\) remains true in the case \(\theta (t)=\frac{K}{ (1+t)^{\nu }}\) with \(\nu \ge 1\) and \(K>0.\)

Now we will present a few straightforward examples to demonstrate the optimality of the results stated in Theorem 4.1.

Example 4.5

We consider here a simple one-dimensional example of a perturbed version of the system (CDS). Precisely, we assume that \({\mathcal {H}}={\mathbb {R}},\) \(C={\mathcal {H}},\) \( T(x)=x,~f(x)=u\) where u is a real constant, \(\theta (t)=\frac{1}{ (1+t)^{\nu }}\) with \(\nu >0\), and \(e(t)=\frac{b}{(1+t)^{\mu }}\) with \(\mu >0\) and b a nonzero real constant. In this case, the solution of (VIP) is clearly \(q^{*}=u.\) Let \(y\in C^{1}([0,\infty ),{\mathbb {R}})\) be the solution of the perturbed version of (CDS) with error term e(.), and set \(y_0=y(0)\). Clearly, y(.) satisfies the Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{l} y^{\prime }(t)+\theta (t)y(t)=\theta (t)u+e(t) \\ y(0)=y_{0} \end{array} \right. \end{aligned}$$

This implies that

$$\begin{aligned} y(t)= & {} u+e^{-\Theta (t)}(y_{0}-u)+b~e^{-\Theta (t)}\int _{0}^{t}\frac{ e^{\Theta (s)}}{(1+s)^{\mu }}ds \\= & {} u+e^{-\Theta (t)}(y_{0}-u)+R(t). \end{aligned}$$

We can then easily deduce the following results:

  (1)

    If \(\nu <1\) and \(\mu >\nu ,\) then \(y(t)\rightarrow q^{*}=u\) as \( t\rightarrow \infty \) since, according to Lemma 2.6, \(R(t)\sim \frac{b}{t^{\mu -\nu }}\), which converges to 0 as \(t \rightarrow \infty \).

  (2)

    If \(\nu <1\) and \(\mu =\nu ,\) then y(t) converges as \(t\rightarrow \infty \) to \(q=u+b\), which means that in this case the error e(t) has a real effect on the asymptotic behavior of the solution y(.). This proves that the condition \(\mu >\nu \) in Theorem 4.1 is optimal.

  (3)

    If \(\nu >1\) and \(b=0,\) then y(t) converges as \(t\rightarrow \infty \) to the limit point \(\ (1-e^{-\frac{1}{\nu -1}})u+e^{-\frac{1}{\nu -1}}y_{0}\), which is in general different from \(q^{*}=u.\) This proves that the first part of Theorem 4.1 does not hold if \(\nu >1\), as we have already mentioned in Remark 4.4. A numerical check of these three cases is sketched below.
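The following Python sketch of ours integrates the Cauchy problem above by the explicit Euler method and reproduces the three limits:

```python
def limit(nu, mu, b, u=1.0, y0=10.0, h=1e-2, t_max=2_000.0):
    """Integrate y'(t) = -theta(t)(y(t) - u) + b/(1+t)^mu by explicit Euler."""
    y = y0
    for k in range(int(t_max / h)):
        t = k * h
        y += h * (-(1.0 + t) ** -nu * (y - u) + b * (1.0 + t) ** -mu)
    return y

print(limit(nu=0.5, mu=0.8, b=2.0))  # case (1): tends (slowly, like t^{-0.3}) to u = 1
print(limit(nu=0.5, mu=0.5, b=2.0))  # case (2): tends to u + b = 3
print(limit(nu=2.0, mu=3.0, b=0.0))  # case (3): tends to u + exp(-1/(nu-1))*(y0 - u), about 4.31
```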

Example 4.6

We are going to show the optimality of (4.2) through the study of another simple one-dimensional example of the system (CDS). We assume that \({\mathcal {H}}= {\mathbb {R}},\) \(C={\mathcal {H}},\) \(f(x)=u\), where u is a nonzero real constant, \(\theta (t)=\frac{1}{ (1+t)^{\nu }}\) with \(\nu \in (0,1),\) \(T(x)=\frac{x}{2}\), and x(.) is a trajectory of the associated dynamical system (CDS) where \(x_{0}\in {\mathbb {R}}\) is a given initial data. Clearly, for every \(t\ge 0,\)

$$\begin{aligned} x^{\prime }(t)+\frac{1+\theta (t)}{2}x(t)=\theta (t)u. \end{aligned}$$

Then,

$$\begin{aligned} x(t)=e^{-\Gamma (t)}x_{0}+u~e^{-\Gamma (t)}\int _{0}^{t}e^{\Gamma (s)}\theta (s)ds, \end{aligned}$$

where \(\Gamma (t)=\int _{0}^{t}\frac{1+\theta (s)}{2}ds.\) Since \( e^{-\Gamma (t)}x_{0}=O\left( e^{-\frac{t}{2}}\right) \), an immediate application of Lemma 2.6 to the second term on the right-hand side of the last equality gives

$$\begin{aligned} \left| x(t)\right| \sim \frac{2\left| u\right| }{t^{\nu }} \text { as }t\rightarrow \infty . \end{aligned}$$

This proves the optimality of the estimates (4.1) and (4.2) since, in this particular example, \(q^{*}=0\) and \(\left| x(t)-T(x(t))\right| = \frac{\left| x(t)\right| }{2}.\)
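Numerically (our own check, with \(\nu =\frac{1}{2}\) and \(u=1\)), the rescaled quantity \(\frac{t^{\nu }\left| x(t)\right| }{2\left| u\right| }\) indeed approaches 1:

```python
nu, u = 0.5, 1.0
h, x = 1e-2, 7.0
for k in range(500_000):                        # integrate up to t = 5000
    t = k * h
    th = (1.0 + t) ** -nu
    x += h * (-0.5 * (1.0 + th) * x + th * u)   # x' + (1+theta)/2 x = theta*u
print(abs(x) * 5000.0 ** nu / (2.0 * abs(u)))   # close to 1
```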

Example 4.7

We assume here that \({\mathcal {H}}\) is the classical Euclidean space \({\mathbb {R}}^{2},\) \( C={\mathcal {H}},\) \(T((x_{1},x_{2})^{T})=(x_{2},x_{1})^{T}\), \( f((x_{1},x_{2})^{T})=(u_{1},u_{2})^{T}\) where \(u=(u_{1},u_{2})^{T}\) is a given element of \({\mathbb {R}}^{2}\), and, as in the previous examples, \(\theta (t)= \frac{1}{(1+t)^{\nu }}\) with \(\nu \in (0,1).\) It is clear that in this case \( \text {Fix}(T)=\{(x_{1},x_{2})^{T}\in {\mathbb {R}}^{2}:x_{1}=x_{2}\}\) and the unique solution of (VIP) is \(q^{*}=({\bar{u}},{\bar{u}})^{T}\) where \({\bar{u}}=\frac{ u_{1}+u_{2}}{2}.\) Let \(x(.)=(x_{1}(.),x_{2}(.))^{T}\) be the trajectory of (CDS) with a given initial data \(x_{0}=(x_{01},x_{02})^{T}.\) To study the asymptotic behavior of x(t) as \(t\rightarrow \infty ,\) we introduce the two auxiliary functions \(v(t)=\frac{x_{1}(t)+x_{2}(t)}{2}\) and \(w(t)=\frac{ x_{1}(t)-x_{2}(t)}{2}.\) Clearly, these functions satisfy the differential equations

$$\begin{aligned} v^{\prime }(t)+\theta (t)v(t)= & {} \theta (t){\bar{u}} \\ w^{\prime }(t)+(2-\theta (t))w(t)= & {} \theta (t){\tilde{u}} \end{aligned}$$

where \({\tilde{u}}=\frac{u_{1}-u_{2}}{2}\). Let us, for the sake of simplicity, assume that \({\tilde{u}}\ne 0.\) By proceeding exactly as in the previous examples, we can easily get

$$\begin{aligned} v(t)= & {} {\bar{u}}+e^{-\Theta (t)}(v(0)-{\bar{u}}) \\ w(t)\sim & {} \frac{\theta (t)}{2}{\tilde{u}} \text { as } t\rightarrow \infty . \end{aligned}$$

We therefore deduce that \(x(t)=(v(t)+w(t),v(t)-w(t))^{T}\rightarrow ({\bar{u}},{\bar{u}})^{T}=q^{*}\) and \(\left\| x(t)-T(x(t))\right\| =2\sqrt{2}\left| w(t)\right| \sim \sqrt{2}\frac{\left| {\tilde{u}}\right| }{t^{\nu }}\) as \(t\rightarrow \infty .\) This confirms again the precision of the results of Theorem 4.1.
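A numerical check of this example (ours, with \(u=(1,3)^{T}\), so that \({\bar{u}}=2\) and \({\tilde{u}}=-1\), and \(\nu =\frac{1}{2}\)) confirms both limits:

```python
import numpy as np

nu = 0.5
u = np.array([1.0, 3.0])                   # ubar = 2, utilde = -1
x = np.array([5.0, -4.0])
h = 1e-2
for k in range(500_000):                   # integrate up to t = 5000
    t = k * h
    th = (1.0 + t) ** -nu
    # x' + x = theta*u + (1 - theta)*T(x), where T swaps the coordinates
    x = x + h * (-x + th * u + (1.0 - th) * x[::-1])
res = np.linalg.norm(x - x[::-1])
print(x, res * 5000.0 ** nu)               # x near (2, 2); rescaled residual near sqrt(2)
```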