1 Introduction

Let \(\mathcal {H}\), \(\mathcal {G}\) be Hilbert spaces. We consider the problem of finding \(p\in \mathcal {H}\) such that

$$\begin{aligned}&0 \in Ap+L^{*}BLp, \end{aligned}$$
(P)

where \( A: \mathcal {H}\rightarrow \mathcal {H}\) and \( B: \mathcal {G}\rightarrow \mathcal {G}\) are maximally monotone operators and \(L: \mathcal {H}\rightarrow \mathcal {G}\) is a bounded linear operator. Together with problem (P) we consider the dual problem formulated as finding \(v^{*}\in \mathcal {G}\) such that

$$\begin{aligned} 0 \in -LA^{-1}(-L^{*}v^{*})+B^{-1}v^{*}. \end{aligned}$$
(D)

To problems (P) and (D) we associate the Kuhn–Tucker set defined as

$$\begin{aligned} Z{:}{=} \{ (p,v^{*}) \in \mathcal {H}\times \mathcal {G}\mid -L^{*} v^{*} \in Ap \quad \text {and}\quad Lp\in B^{-1}v^{*} \}. \end{aligned}$$
(Z)

The set Z is nonempty if and only if there exist solutions to both the primal problem (P) and the dual problem (D) (see [26, Corollary 2.12]).

Our aim in this paper is to investigate, for given \(x_{0}, {\bar{w}}\in \mathcal {H}\times \mathcal {G}\), the following dynamical system, whose solution asymptotically approaches a solution of (P)–(D),

$$\begin{aligned}&{\dot{x}}(t)=Q({\bar{w}},x(t),{\mathbb {T}} x(t))-x(t),\quad t\ge 0,\\&x(0)=x_0, \end{aligned}$$
(S)

where \({\mathbb {T}} : \mathcal {H}\times \mathcal {G}\rightarrow \mathcal {H}\times \mathcal {G}\) is an operator whose fixed point set is Z (\(\text {Fix}\,{\mathbb {T}} = Z\)), with Z defined by (Z), and \(Q:(\mathcal {H}\times \mathcal {G})^{3}\rightarrow \mathcal {H}\times \mathcal {G}\),

$$\begin{aligned} Q({\bar{w}},b,c){:}{=}P_{H({\bar{w}},b)\cap H(b,c)}({\bar{w}}), \end{aligned}$$
(1)

is the projection of the element \({\bar{w}}\) onto the set \(H({\bar{w}},b)\cap H(b,c)\), the intersection of two half-spaces of the form

$$\begin{aligned} \begin{aligned}&H(z_1,z_2){:}{=}\{ h\in \mathcal {H}\times \mathcal {G}\mid \langle h-z_{2} | z_{1}-z_{2} \rangle \le 0 \},\quad z_1,z_2 \in \mathcal {H}\times \mathcal {G}. \end{aligned} \end{aligned}$$
(2)

In particular, \( H({\bar{w}},b)=\{ h\in \mathcal {H}\times \mathcal {G}\mid \langle h-b | {\bar{w}}-b \rangle \le 0 \}. \)
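For intuition, the projection onto a single half-space of the form (2) has the closed form \(P_{H(z_1,z_2)}(h)=h-\frac{\max \{0,\langle h-z_{2} \mid z_{1}-z_{2} \rangle \}}{\Vert z_{1}-z_{2}\Vert ^{2}}(z_{1}-z_{2})\) for \(z_1\ne z_2\), while \(H(z_1,z_1)\) is the whole space. A minimal numerical sketch (plain Python lists stand in for elements of \(\mathcal {H}\times \mathcal {G}\); all helper names are ours, not the paper's):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def sub(u, v):
    return [a - b for a, b in zip(u, v)]

def halfspace_projection(h, z1, z2):
    """Project h onto H(z1, z2) = {u : <u - z2 | z1 - z2> <= 0}.

    If z1 == z2 the half-space is the whole space and h is returned unchanged.
    """
    n = sub(z1, z2)                    # outer normal z1 - z2
    nn = dot(n, n)
    if nn == 0.0:
        return list(h)
    excess = max(0.0, dot(sub(h, z2), n))
    return [hi - (excess / nn) * ni for hi, ni in zip(h, n)]

# Example in R^2: with z1 = (0, 1), z2 = (0, 0), H(z1, z2) is the lower
# half-plane {u : u_2 <= 0}.
z1, z2 = [0.0, 1.0], [0.0, 0.0]
print(halfspace_projection([3.0, 2.0], z1, z2))   # point above: dropped to the boundary
print(halfspace_projection([3.0, -1.0], z1, z2))  # point already inside: unchanged
```

The `max(0, ...)` term encodes the case split between a point already in the half-space and one that must be moved along the normal.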

Under explicit discretization with step size equal to one, the system (S) becomes the best approximation algorithm for finding a fixed point of \({\mathbb {T}}\) introduced in [2, Proposition 2.1] (see also [6, Theorem 30.8]),

$$\begin{aligned} x_{n+1}=Q({\bar{w}},x_n, x_{n+1/2}),\quad n\in {\mathbb {N}} \end{aligned}$$
(3)

with the choice \(x_{n+1/2}{:}{=}{\mathbb {T}}(x_n)\) and the starting point \(x_0\). The characteristic feature of this algorithm is the strong convergence of the sequence \(x_n\) to a fixed point of \({\mathbb {T}}\) (see also [5]). In contrast, the dynamical system investigated, e.g., in [11] is related to another primal–dual method, which exhibits only weak convergence.
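To make (3) concrete: the projection onto the intersection of the two half-spaces admits a classical explicit formula due to Haugazeau (see, e.g., the book cited here as [6]), which the case analysis below follows. The operator \({\mathbb {T}}\) in this sketch is our own toy choice (the projection onto the horizontal axis in the plane, so its fixed point set is that axis), not the operator constructed in the paper; the iterates approach the point of the axis nearest to \({\bar{w}}\):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def Q(w, b, c):
    """Projection of w onto H(w, b) ∩ H(b, c) via Haugazeau's explicit formula."""
    wb = [wi - bi for wi, bi in zip(w, b)]   # w - b
    bc = [bi - ci for bi, ci in zip(b, c)]   # b - c
    pi, mu, nu = dot(wb, bc), dot(wb, wb), dot(bc, bc)
    rho = mu * nu - pi * pi
    if rho <= 1e-15 and pi >= 0.0:           # (nearly) nested half-spaces: project onto the inner one
        return list(c)
    if rho > 0.0 and pi * nu >= rho:
        return [wi + (1.0 + pi / nu) * (ci - bi) for wi, bi, ci in zip(w, b, c)]
    return [bi + (nu / rho) * (pi * wbi + mu * (ci - bi))
            for bi, ci, wbi in zip(b, c, wb)]

def T(x):
    # illustrative nonexpansive operator: projection onto the horizontal axis
    return [x[0], 0.0]

w_bar = [0.3, 2.0]
x = [1.0, 1.0]
for _ in range(20):                          # iteration (3): x_{n+1} = Q(w_bar, x_n, T x_n)
    x = Q(w_bar, x, T(x))
print(x)  # approaches the point of Fix T nearest to w_bar, here (0.3, 0)
```

On this toy instance the iteration in fact lands on the limit after one step; in general only strong convergence is guaranteed.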

In the case when \(A=\partial f\), \(B=\partial g\), where \(f: \mathcal {H}\rightarrow {\mathbb {R}}\cup \{+\infty \}\), \(g: \mathcal {G}\rightarrow {\mathbb {R}}\cup \{+\infty \}\) are proper, convex, lower semicontinuous (l.s.c.) functions, the problem (P) (if solvable) reduces to finding a point \(p\in \mathcal {H}\) solving the following minimization problem (see [27])

$$\begin{aligned} \text {minimize}_{p\in \mathcal {H}}\quad f(p)+g(Lp) \end{aligned}$$
(4)

and (D) reduces to finding a point \(v^{*}\in \mathcal {G}\) solving the following maximization problem

$$\begin{aligned} \text {maximize}_{v^{*} \in \mathcal {G}} -f^{*}(-L^{*}v^{*})-g^{*}(v^{*}). \end{aligned}$$
(5)

First-order dynamical systems related to optimization problems have been discussed by many authors (see, e.g., [1, 4, 9, 10, 12]). In those papers, a natural assumption is that the vector field F is globally Lipschitz; consequently, the existence and uniqueness of solutions to the dynamical system are guaranteed by classical results (see, e.g., [13, Theorem 7.3]). For instance, Abbas, Attouch and Svaiter considered the following system in [1]

$$\begin{aligned} \begin{aligned}&{\dot{x}}(t)+x(t)=\text {prox}_{\mu \varPhi } (x(t)-\mu B(x(t))),\\&x(0)=x_0 , \end{aligned} \end{aligned}$$
(6)

where \(\varPhi : \mathcal {H}\rightarrow {\mathbb {R}}\cup \{+\infty \}\) is a proper, convex and l.s.c. function defined on a Hilbert space \(\mathcal {H}\), \(B: \mathcal {H}\rightarrow \mathcal {H}\) is a \(\beta \)-cocoercive operator and \(\text {prox}_{\mu \varPhi }: \mathcal {H}\rightarrow \mathcal {H}\) is the proximal operator defined as

$$\begin{aligned} \text {prox}_{\mu \varPhi }(x)=\arg \min _{y\in \mathcal {H}} \{ \varPhi (y)+\frac{1}{2\mu }\Vert x-y\Vert ^2 \}. \end{aligned}$$

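As a concrete instance of this definition: for \(\varPhi =|\cdot |\) on \(\mathcal {H}={\mathbb {R}}\), the minimization defining the proximal operator can be carried out explicitly and yields the well-known soft-thresholding operator \(\text {prox}_{\mu |\cdot |}(x)=\text {sign}(x)\max \{|x|-\mu ,0\}\). A quick sketch (our own illustration) comparing the closed form with a brute-force minimization of the defining objective:

```python
def prox_abs(x, mu):
    """Closed form of prox_{mu|.|}(x): soft thresholding."""
    if x > mu:
        return x - mu
    if x < -mu:
        return x + mu
    return 0.0

def prox_brute(x, mu, lo=-10.0, hi=10.0, n=200_001):
    """Approximate argmin_y |y| + (1/(2*mu)) * (x - y)**2 on a fine grid."""
    step = (hi - lo) / (n - 1)
    best_y, best_v = lo, float("inf")
    for i in range(n):
        y = lo + i * step
        v = abs(y) + (x - y) ** 2 / (2.0 * mu)
        if v < best_v:
            best_y, best_v = y, v
    return best_y

print(prox_abs(3.0, 1.0))    # 2.0
print(prox_brute(3.0, 1.0))  # close to 2.0
```

The grid search is only a sanity check; the closed form is exact.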
Furthermore, Boţ and Csetnek, in [9], studied the dynamical system

$$\begin{aligned} \begin{aligned}&{\dot{x}}(t)=\uplambda (t)(T(x(t))-x(t)),\quad t\ge 0\\&x(0)=x_0, \end{aligned} \end{aligned}$$
(7)

where \(T: \mathcal {H}\rightarrow \mathcal {H}\) is a nonexpansive operator and \(\uplambda : [0,\infty ) \rightarrow [0,1]\) is a Lebesgue measurable function. Applying in (7) the operator T defined as \(T=J_{\gamma A}( Id- \gamma B )\), where \(J_{\gamma A}\) denotes the resolvent of a maximally monotone operator \(A: \mathcal {H}\rightarrow \mathcal {H}\), the system (7), under a special discretization (see, e.g., [9, Remark 8]), leads to the forward–backward algorithm for solving operator inclusion problems of the form

$$\begin{aligned} \text {find} ~x\in \mathcal {H}\quad s.t. \quad 0\in A(x)+B(x). \end{aligned}$$

For other discretizations see e.g., [28, Section 2.3].
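To see the forward–backward scheme \(x_{n+1}=J_{\gamma A}(x_n-\gamma Bx_n)\) in action on a toy instance (our own illustration, not an example from the cited works): take \(\mathcal {H}={\mathbb {R}}\), \(A=\partial |\cdot |\), so that \(J_{\gamma A}\) is soft thresholding, and \(B=\nabla g\) with \(g(x)=\frac{1}{2}(x-2)^2\); the unique solution of \(0\in A(x)+B(x)\) is \(x=1\):

```python
def soft(x, mu):
    """Resolvent J_{mu A} of A = subdifferential of |.| (soft thresholding)."""
    if x > mu:
        return x - mu
    if x < -mu:
        return x + mu
    return 0.0

def forward_backward(x0, gamma=0.5, n_iter=60):
    """x_{k+1} = J_{gamma A}(x_k - gamma * B(x_k)) with B(x) = x - 2."""
    x = x0
    for _ in range(n_iter):
        x = soft(x - gamma * (x - 2.0), gamma)
    return x

print(forward_backward(0.0))  # converges to 1.0
```

Here the error contracts by the factor \(1-\gamma \) per step, so a few dozen iterations suffice.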

The most essential difference between (S) and the systems (6), (7) is that, in general, one cannot expect the vector field Q in (S) to be globally Lipschitz with respect to the variable x, as is the case for the dynamical systems (6) and (7).

The contribution of the present investigation is as follows. We formulate the problem and provide preliminary facts in Sects. 2 and 3, respectively. In Sect. 4 we prove the existence and uniqueness of solutions to dynamical system (S) by studying a more general problem (DS-0). Extendability of solutions to dynamical system (DS-0) is studied in Sect. 5. The behaviour at \(+ \infty \) of solutions to (DS-0) is investigated in Sect. 6. In Sect. 7 we present applications of the results obtained for (DS-0) to projected dynamical systems (PDS).

2 Formulation of the Problem

Suppose that the set Z given by (Z) is nonempty. Then for all \(x\in \mathcal {H}\times \mathcal {G}\), \(Z\subset H(x,{\mathbb {T}} x)\). Let \({\bar{w}}\in \mathcal {H}\times \mathcal {G}\) and \({\bar{z}}=P_{Z}({\bar{w}})\). Let us define the open ball in the Hilbert space \(\mathcal {H}\times \mathcal {G}\) centered at \(a\in \mathcal {H}\times \mathcal {G}\) with radius \(R>0\) as follows:

$$\begin{aligned} {\mathbb {B}}(a,R){:}{=}\{ x\in \mathcal {H}\times \mathcal {G}\mid \Vert a-x\Vert <R\} \end{aligned}$$

and its closure by

$$\begin{aligned} \bar{{\mathbb {B}}}(a,R){:}{=}\{ x\in \mathcal {H}\times \mathcal {G}\mid \Vert a-x\Vert \le R\}. \end{aligned}$$

We limit ourselves to a closed subset \({\mathcal {D}}\subset \mathcal {H}\times \mathcal {G}\) such that for all \(x \in {\mathcal {D}}\) we have \({\bar{z}}\in H({\bar{w}},x)\). This latter condition ensures that \({\bar{z}}\) is an equilibrium point of

$$\begin{aligned} Q({\bar{w}},\cdot , {\mathbb {T}}(\cdot )): {\mathcal {D}}\rightarrow {\mathcal {D}}. \end{aligned}$$

The fact that

$$\begin{aligned} x \in \bar{{\mathbb {B}}}\left( \frac{{\bar{w}}+{\bar{z}}}{2},\frac{\Vert {\bar{w}}-{\bar{z}}\Vert }{2}\right) \text { if and only if } \langle {\bar{z}} - x | {\bar{w}}-x \rangle \le 0, \end{aligned}$$
(8)

implies the following

$$\begin{aligned} Z\subset H({\bar{w}},x) \implies {\bar{z}}\in H({\bar{w}},x) \iff x \in \bar{{\mathbb {B}}}\left( \frac{{\bar{w}}+{\bar{z}}}{2},\frac{\Vert {\bar{w}}-{\bar{z}}\Vert }{2}\right) . \end{aligned}$$

Therefore, we limit our attention to \(Q({\bar{w}},\cdot ,{\mathbb {T}} (\cdot ))\) given by (1), defined on \({\mathcal {D}}\subset \bar{{\mathbb {B}}}\left( \frac{{\bar{w}}+{\bar{z}}}{2},\frac{\Vert {\bar{w}}-{\bar{z}}\Vert }{2}\right) \).
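The equivalence (8) is the elementary identity \(\Vert x-\tfrac{{\bar{w}}+{\bar{z}}}{2}\Vert ^{2}-\tfrac{1}{4}\Vert {\bar{w}}-{\bar{z}}\Vert ^{2}=\langle {\bar{z}}-x \mid {\bar{w}}-x \rangle \) in disguise; a throwaway numerical sanity check (not part of the paper's argument, with arbitrary sample data):

```python
import random

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
w = [0.3, 2.0]                      # sample w_bar
z = [1.5, -0.7]                     # sample z_bar
d = [a - b for a, b in zip(w, z)]
R2 = dot(d, d) / 4.0                # squared radius ||w - z||^2 / 4
center = [(a + b) / 2.0 for a, b in zip(w, z)]

for _ in range(1000):
    x = [random.uniform(-5, 5), random.uniform(-5, 5)]
    xc = [a - b for a, b in zip(x, center)]
    lhs = dot(xc, xc) - R2                                       # ||x - center||^2 - R^2
    rhs = dot([a - b for a, b in zip(z, x)],
              [a - b for a, b in zip(w, x)])                     # <z - x | w - x>
    assert abs(lhs - rhs) < 1e-9    # so x is in the closed ball iff the inner product is <= 0
```

Since the two quantities are identical, membership in the closed ball and the inner-product inequality always agree in sign.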

Let us note that for \(x={\bar{w}}\) we have \(H({\bar{w}},x)=\mathcal {H}\times \mathcal {G}\). This motivates us to restrict our investigations to the set \({\hat{\mathcal {D}}}{:}{=}{\mathcal {D}} \setminus {\mathbb {B}}({\bar{w}},r)\) for some \(r>0\) such that \({\hat{\mathcal {D}}}\) is nonempty.

System (S) is an autonomous dynamical system of the form

$$\begin{aligned}&{\dot{x}}(t)=F(x(t)),\quad t\ge 0,\\&x(0)=x_0\in {\hat{{\mathcal {D}}}}\setminus \{ {\bar{z}} \}, \end{aligned}$$
(DS)

where \(F: {\hat{\mathcal {D}}} \rightarrow \mathcal {X}\), with \(\mathcal {X}\) a Hilbert space, is a continuous function, locally Lipschitz on \({\hat{\mathcal {D}}}\) except at a single point \({\bar{z}}\in {\hat{\mathcal {D}}}\), and \({\hat{\mathcal {D}}}\) is a closed and bounded set in \(\mathcal {X}\). Indeed, when \(F(x){:}{=}Q({\bar{w}},x,{\mathbb {T}} x)-x\), where \({\mathbb {T}}: \mathcal {H}\times \mathcal {G}\rightarrow \mathcal {H}\times \mathcal {G}\) is defined as in (30) and \(Q: (\mathcal {H}\times \mathcal {G})^3 \rightarrow \mathcal {H}\times \mathcal {G}\) is defined in (1), the system (DS) reduces to (S). For other applications we refer the reader to Sect. 7.

A survey of existing results on the solvability and uniqueness of solutions beyond the classical Cauchy–Picard theorem, on the journey from finite- to infinite-dimensional settings, can be found in [20].

The main difficulties in investigating the existence of solutions to autonomous ODEs in infinite-dimensional settings are due to the lack of compactness; see [22, Remark 5.1.1]. For instance, the continuity of the right-hand-side vector field F is not enough to obtain a counterpart of Peano’s theorem in infinite-dimensional spaces [17], even in Hilbert spaces [34].

In [18] Godunov proved that in every infinite-dimensional Banach space there exists a continuous vector field F such that there is no solution to the related (DS), whereas, by the Cauchy–Lipschitz–Picard–Lindelöf theorem, the global Lipschitz condition on the right-hand-side field ensures the uniqueness and/or extendability of the solution; see [13, Theorem 7.3]. Some attempts to weaken the global Lipschitz condition on the right-hand-side vector field have been made in the context of the existence of solutions; see, e.g., [22, Theorem 5.1.1] and [19, 23, 30, 31] and the references therein. It is observed that the local Lipschitzness of the vector field allows one to prove local existence and uniqueness for the related problems. For instance, one can adapt [22, Theorem 5.1.1] to the case of an autonomous differential system in the following way.

Corollary 1

Define the rectangle \(R_0 = \{ x \in \mathcal {X}\mid \Vert x-x_0\Vert \le \beta \}\) for some \(\beta >0\). Let \(f: R_0 \rightarrow \mathcal {X}\). Assume that \(\Vert f(x)\Vert \le {\tilde{M}}\) for \(x\in R_0\) and \(\Vert f(x_1)-f(x_2)\Vert \le K \Vert x_1-x_2\Vert \) for \(x_1,x_2\in R_0\), where K and \({\tilde{M}}\) are nonnegative constants. Let \(\alpha >0\) be such that \(\alpha \le \frac{\beta }{{\tilde{M}}}\). Then there exists one and only one (strongly) continuously differentiable function x(t) satisfying

$$\begin{aligned}&{\dot{x}}(t)=f(x(t)),\quad |t-t_0|\le \alpha ;\quad x(t_0)=x_0. \end{aligned}$$

Let us note that Corollary 1 is not applicable to system (DS) in the case when \(x_{0} \notin \text {int}\, {\hat{\mathcal {D}}}\) (see also Remark 4 below). Moreover, it has been shown that the local Lipschitzness condition is not enough to guarantee the existence of trajectories on \([t_{0},+\infty )\) (see, e.g., [21] and the references therein). Instead, in Sects. 4 and 5 we use modified standard techniques to show the existence and uniqueness of solutions to (DS).

In [14] a smooth vector field is constructed such that the respective autonomous dynamical system has a bounded maximal solution which is not globally defined.

In finite-dimensional settings, under the assumption of local Lipschitzness and some boundedness of the vector field, the existence and uniqueness of the trajectory on \([t_{0},+\infty )\) were shown in [32] by Xia and Wang. The authors applied their results to the investigation of projected dynamical systems.

3 Preliminaries

In this section we formulate the system (S) (and (DS)) in the general form.

Let \({\bar{w}},{\bar{z}} \in \mathcal {X}\), where \(\mathcal {X}\) is a Hilbert space with the norm \(\Vert \cdot \Vert =\sqrt{\langle \cdot \mid \cdot \rangle }\). Let \({\mathcal {D}}\subset \mathcal {X}\) be a closed convex subset of \(\mathcal {X}\) such that \({\bar{w}},{\bar{z}}\in {\mathcal {D}}\) and

$$\begin{aligned} \langle {\bar{z}} - x \mid {\bar{w}}-x \rangle \le 0\quad \text {for all}\ x\in {\mathcal {D}}. \end{aligned}$$
(9)

Note that the condition (9) immediately implies that \({\bar{w}}\) and \({\bar{z}}\) are boundary points of the set \({\mathcal {D}}\).

Let r be such that \(\Vert {\bar{w}}-{\bar{z}}\Vert ^{2}>r>0\). Throughout this paper, we consider the set \({\hat{{\mathcal {D}}}}\) related to \({\mathcal {D}}\) (see Fig. 1):

$$\begin{aligned} {\hat{{\mathcal {D}}}}=\{ x\in {\mathcal {D}} \mid \Vert x-{\bar{w}}\Vert ^{2}\ge r \}. \end{aligned}$$
(10)

We consider the following Cauchy problem

$$\begin{aligned}&{\dot{x}}(t)=F(x(t)),\quad t\ge t_0 \ge 0,\\&x(t_0)=x_{00}\in {\hat{{\mathcal {D}}}}\setminus \{ {\bar{z}} \}, \end{aligned}$$
(DS-0)

where \(F: {\hat{{\mathcal {D}}}} \rightarrow \mathcal {X}\) is continuous on \({\hat{{\mathcal {D}}}}\), locally Lipschitz on \({\hat{{\mathcal {D}}}}\setminus \{ {\bar{z}} \}\), and bounded on \({\hat{{\mathcal {D}}}}\) (i.e., \(\Vert F(x)\Vert \le M\) for some \(M>0\) and all \(x\in {\hat{{\mathcal {D}}}}\)).

Moreover, we assume:

(A) \({\bar{z}}\) is the only zero point of F in \({\hat{{\mathcal {D}}}}\), i.e. \(F(x)=0\) iff \(x={\bar{z}}\);

(B) for all \(x\in {\hat{{\mathcal {D}}}}\) and all \(h\in [0,1]\) we have \(x+hF(x)\in {\hat{{\mathcal {D}}}}\).

Together with assumptions (A) and (B) we also consider the following assumption related to the behaviour of the projection:

(C) \(\langle F(x) | {\bar{w}} - x \rangle \le 0\) for all \(x\in {\hat{{\mathcal {D}}}}\).

Fig. 1: Illustration of the considered sets

Remark 1

The motivation for considering a nonconvex set \({\hat{{\mathcal {D}}}}\) comes from the following observation. Consider \(F: {\mathcal {D}} \rightarrow \mathcal {X}\) defined as

$$\begin{aligned} F(x)=P_{{\mathbb {C}}(x)}({\bar{w}}), \end{aligned}$$
(11)

where \(P_{{\mathbb {C}}(x)}({\bar{w}})\) is the projection of \({\bar{w}}\) onto \({\mathbb {C}}(x)\), \({\mathbb {C}}: {\mathcal {D}} \rightrightarrows \mathcal {X}\) is a multifunction given by \({\mathbb {C}}(x)=H({\bar{w}},x)\cap H(x,g(x))\) (see formula (2) for \(H(\cdot ,\cdot )\)) and \(g: \mathcal {X}\rightarrow \mathcal {X}\) satisfies \({\bar{z}}\in H(x,g(x))\) for all \(x\in \mathcal {X}\). Under a suitable assumption on g, the function F given by (11) is locally Lipschitz on \({\mathcal {D}}\setminus \{{\bar{w}},{\bar{z}}\}\) (see e.g. [7]), continuous on \({\mathcal {D}}\setminus \{{\bar{w}}\}\) and bounded on \({\mathcal {D}}\).

Throughout the paper we use the following concept of solutions to the dynamical systems (DS-0) and (DS) and of their extendability.

Definition 1

Let

$$\begin{aligned} {{\mathcal {T}}}=[t_{0},T), \ t_{0}<T\le +\infty \quad \text {or }\quad {{\mathcal {T}}}=[t_{0},T],\ t_{0}<T<+\infty . \end{aligned}$$

Solution of

$$\begin{aligned}&{\dot{x}}(t)=F(x(t)),\quad t\ge t_0 \ge 0,\\&x(t_0)=x_{00}\in A\setminus \{ {\bar{z}} \}, \end{aligned}$$
(DS-A)

where \(F:\ A \rightarrow \mathcal {X}\), \(A\subseteq \mathcal {X}\), on the interval \({{\mathcal {T}}}\) is any function

$$\begin{aligned} x(\cdot )\in C^{1}({{\mathcal {T}}},A) \end{aligned}$$

satisfying

  1. the initial condition \(x(t_{0})=x_{00}\);

  2. the equation \({\dot{x}}(t)=F(x(t))\) for all \(t\in {{\mathcal {T}}}\), where differentiation is understood in the sense of the strong derivative in the space \(\mathcal {X}\); at a boundary point of the interval \({{\mathcal {T}}}\), in the case when it belongs to \({{\mathcal {T}}}\), the derivative is understood in the one-sided sense.

Definition 2

A solution x(t) of problem (DS) on the interval \({\mathcal {T}}_1=[0,T]\) (or \({\mathcal {T}}_1=[0,T)\)) is called non-extendable if there is no solution \(x_2(\cdot )\in C^1({\mathcal {T}}_2,{\hat{{\mathcal {D}}}})\) of this problem on any interval \({\mathcal {T}}_2\) satisfying the conditions:

  1. \({\mathcal {T}}_2\supsetneq {\mathcal {T}}_1\);

  2. \(x_{2}(t)=x(t)\) for all \(t\in {\mathcal {T}}_1\).

Remark 2

If x(t) is a solution of the Cauchy problem (DS-0) on the interval \({\mathcal {T}}=[t_{0},T]\) (or \({\mathcal {T}}=[t_{0},T)\)), then the restriction of x(t) to any interval \({\mathcal {T}}_1=[t_{0},t_{1}]\subset {\mathcal {T}}\) (or \({\mathcal {T}}_1=[t_{0},t_{1})\subset {\mathcal {T}}\)) is a solution of the Cauchy problem (DS-0) on \({{\mathcal {T}}}_1\) with the initial condition \(x_{00}=x(t_0)\).

The main results on the existence, uniqueness and extendability of solutions to (DS-0) read as follows.

Theorem 1

(Existence and uniqueness) Suppose that assumptions (A), (B) and (C) hold. Then there exists a unique solution of (DS-0) on \([t_{0},+\infty )\).

Theorem 2

(Behaviour at \(+\infty \)) Let x(t) be the unique solution of (DS-0) on \([t_{0},+\infty )\). Assume that for every increasing sequence \(\{t_n\}_{n\in {\mathbb {N}}}\) with \(t_n\rightarrow +\infty \),

$$\begin{aligned} x(t_{n})\rightharpoonup {\tilde{x}} \implies {\tilde{x}}={\bar{z}}, \end{aligned}$$
(12)


Then the trajectory x(t) satisfies the condition \(\lim _{t\rightarrow +\infty } x(t)={\bar{z}}\), where convergence is understood in the sense of the norm of \(\mathcal {X}\).

Remark 3

Condition (12) can be seen as a continuous analogue of condition (iv) of Proposition 2.1 of [2]. Namely, to obtain the strong convergence of the sequence generated by (3) it is assumed in Proposition 2.1 of [2] that for any strictly increasing sequence \(\{k_{n}\}\subset {\mathbb {N}}\) the following implication holds:

$$\begin{aligned} x_{k_{n}}\rightharpoonup {\tilde{x}} \implies {\tilde{x}}={\bar{z}}. \end{aligned}$$

4 Solutions to (DS-0) on Closed Intervals

In this section we consider the existence and uniqueness of solutions to (DS-0) defined on closed intervals \([t_{0},T]\), where \(T>t_{0}\) is finite. In deriving the existence and uniqueness results, we modify two standard approaches (with the help of assumptions (A)–(C)): the Euler method (Sect. 4.1) and the contraction mapping principle (Sect. 4.2). To this aim we will use the following proposition.

Proposition 1

Assume that (C) holds. Then any solution x(t) of (DS-0) satisfies the condition

$$\begin{aligned} \Vert x(t)-{\bar{w}}\Vert \quad \text {is nondecreasing with respect to } t\ge t_{0}. \end{aligned}$$

Proof

Let us note that x(t) is continuously differentiable on its interval of existence; therefore, by (C), we have

$$\begin{aligned} \frac{1}{2} \frac{d}{dt} \Vert x(t)-{\bar{w}}\Vert ^2&= \langle {\dot{x}}(t) \mid x(t)-{\bar{w}}\rangle \\&= \langle F(x(t)) \mid x(t)-{\bar{w}}\rangle \ge 0. \end{aligned}$$

\(\square \)
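A quick numerical illustration of Proposition 1 on our own toy instance (not from the paper): for the field \(F(x)=Q({\bar{w}},x,{\mathbb {T}}x)-x\) of (S), with \({\mathbb {T}}\) chosen as the projection onto the horizontal axis in the plane and Q evaluated via Haugazeau's classical closed form, an explicit Euler simulation of \({\dot{x}}=F(x)\) exhibits a nondecreasing \(t\mapsto \Vert x(t)-{\bar{w}}\Vert \):

```python
import math

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def Q(w, b, c):
    """Projection of w onto H(w, b) ∩ H(b, c) (Haugazeau's explicit formula)."""
    wb = [wi - bi for wi, bi in zip(w, b)]
    bc = [bi - ci for bi, ci in zip(b, c)]
    pi, mu, nu = dot(wb, bc), dot(wb, wb), dot(bc, bc)
    rho = mu * nu - pi * pi
    if rho <= 1e-15 and pi >= 0.0:   # (nearly) nested half-spaces: project onto the inner one
        return list(c)
    if rho > 0.0 and pi * nu >= rho:
        return [wi + (1.0 + pi / nu) * (ci - bi) for wi, bi, ci in zip(w, b, c)]
    return [bi + (nu / rho) * (pi * wbi + mu * (ci - bi))
            for bi, ci, wbi in zip(b, c, wb)]

def T(x):                            # toy operator: projection onto the horizontal axis
    return [x[0], 0.0]

w_bar = [0.3, 2.0]
x = [1.0, 1.0]
h = 0.01                             # explicit Euler step for x' = Q(w_bar, x, Tx) - x
norms = []
for _ in range(1000):
    norms.append(math.sqrt(sum((a - b) ** 2 for a, b in zip(x, w_bar))))
    q = Q(w_bar, x, T(x))
    x = [xi + h * (qi - xi) for xi, qi in zip(x, q)]

# the distance to w_bar never decreases along the simulated trajectory
assert all(a <= b + 1e-12 for a, b in zip(norms, norms[1:]))
```

The monotonicity observed here is exactly the statement \(\langle F(x) \mid x-{\bar{w}}\rangle \ge 0\) of the proof, discretized.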

Now we show the uniqueness of trajectories.

Proposition 2

Let \(t_{0}\ge 0\) and let \(x_{0}\in {\hat{{\mathcal {D}}}}\setminus \{ {\bar{z}} \}\). Assume that assumptions (A) and (C) hold. If (DS-0) is solvable on a given interval \([t_0,T]\), then the solution is unique on this interval.

Proof

Suppose that \(x_{1}(\cdot )\) and \(x_{2}(\cdot )\) solve (DS-0) on the interval \([t_0,T]\) and define

$$\begin{aligned} {\bar{t}}{:}{=}\sup \{t\in [t_{0},T] \mid \Vert x_{1}(t)- x_{2}(t)\Vert = 0\}. \end{aligned}$$
(13)

Let us note that \(x_{1}(t_{0})=x_{00}=x_{2}(t_{0})\). Consider two cases:

Case 1: \(x_{1}({\bar{t}})=x_{2}({\bar{t}})={\bar{z}}\). Then, by Proposition 1, \(\Vert x_{1}(t)-{\bar{w}}\Vert \ge \Vert {\bar{z}}-{\bar{w}}\Vert \) and \(\Vert x_{2}(t)-{\bar{w}}\Vert \ge \Vert {\bar{z}}-{\bar{w}}\Vert \) for \(t\ge {\bar{t}}\). However, since \({\bar{z}}\in {\hat{\mathcal {D}}}\) and, by (8), \({\hat{\mathcal {D}}}\subset {\mathcal {D}} \subset \bar{{\mathbb {B}}}(\frac{{\bar{w}}+{\bar{z}}}{2},\frac{\Vert {\bar{w}}-{\bar{z}}\Vert }{2})\),

    $$\begin{aligned} \{ x\in \mathcal {X}\mid \Vert x-{\bar{w}}\Vert \ge \Vert {\bar{w}}-{\bar{z}}\Vert \} \cap {\hat{{\mathcal {D}}}}= \{{\bar{z}}\}. \end{aligned}$$

    Therefore, by assumption (A), \(x_{1}(t)=x_{2}(t)={\bar{z}}\) for all \(t\in [{\bar{t}},T]\).

Case 2: \(x_1({\bar{t}})=x_2({\bar{t}})\ne {\bar{z}}\) and \({\bar{t}}< T\). Then, by the local Lipschitzness of \(F(\cdot )\) on \({\hat{{\mathcal {D}}}}\setminus \{{\bar{z}}\}\), there exists a neighbourhood \(U(x_1({\bar{t}}))\) of \(x_1({\bar{t}})\) such that F is Lipschitz on \(U(x_1({\bar{t}}))\) with some constant \(L_{x_1({\bar{t}})}\), i.e.,

    $$\begin{aligned} \forall x^1,x^2 \in U(x_1({\bar{t}})) \quad \Vert F(x^1)-F(x^2)\Vert \le L_{x_1({\bar{t}})} \Vert x^1-x^2\Vert . \end{aligned}$$

    Since \(x_1\) and \(x_2\) are Lipschitz functions with constant M, there exists a neighbourhood \(V({\bar{t}})\) of \({\bar{t}}\) such that

    $$\begin{aligned} \forall t\in V({\bar{t}})\cap [t_0,T] \quad x_1(t)\in U(x_1({\bar{t}})) \quad \wedge \quad x_2(t)\in U(x_1({\bar{t}})). \end{aligned}$$

    Then for \(t\in V({\bar{t}})\cap [t_0,T]\)

    $$\begin{aligned}&\frac{d}{dt}\left( \frac{1}{2}\Vert x_{1}(t)-x_{2}(t)\Vert ^{2}\right) =\langle {\dot{x}}_{1}(t)-{\dot{x}}_{2}(t) | x_{1}(t)-x_{2}(t)\rangle \\&= \langle F(x_{1}(t))-F(x_{2}(t)) | x_{1}(t)-x_{2}(t)\rangle \le L_{x_{1}({\bar{t}})}\Vert x_{1}(t)-x_{2}(t)\Vert ^{2}. \end{aligned}$$

    By using Gronwall’s inequality for the function \(t\mapsto \Vert x_{1}(t)-x_{2}(t)\Vert ^{2}\) we obtain that \(\Vert x_{1}(t)-x_{2}(t)\Vert ^{2}\le 0\), i.e., \(x_{1}(t)=x_{2}(t)\) for \(t\in V({\bar{t}})\cap [t_{0},T]\). This contradicts the definition (13) of \({\bar{t}}\), since \({\bar{t}}< T\).

\(\square \)

Proposition 3

x(t) is a solution of (DS-A) on \({\mathcal {I}}=[t_0,T]\) (\(T>t_0\) is arbitrary) if and only if it satisfies the condition

$$\begin{aligned} x(t)=x_0+\int _{t_0}^{t} F(x(s))\, ds,\quad \forall t\in {\mathcal {I}}, \end{aligned}$$
(14)

where the integral is understood in the sense of Riemann and \(x(t)\in A\) for \(t\in {\mathcal {I}}\).

Let us define

$$\begin{aligned} {\mathbb {B}}_{t_0,T}{:}{=} C([t_0,t_0+T]; \hat{\mathcal {D}} ) \end{aligned}$$

and

$$\begin{aligned} {\mathbb {B}}_{t_0,x_0,T}^R{:}{=} \{ x\in {\mathbb {B}}_{t_0,T} \mid \sup _{t\in [t_0,t_0+T]} \Vert x(t)-x_0\Vert \le R \}. \end{aligned}$$

Let us note that \({\mathbb {B}}_{t_0,T}\) is a complete metric space due to the fact that \({\hat{{\mathcal {D}}}}\) is a closed subset of the Hilbert space \(\mathcal {X}\). Moreover, in the sequel we consider on \({\hat{{\mathcal {D}}}}\) the topology induced by the norm topology of \(\mathcal {X}\).

4.1 Euler Method

We start with the following construction of Euler trajectories.

For any \(\uplambda \in (0,1]\) define \(c_n^{\uplambda }\), \(n=0,1,\dots \) as follows

$$\begin{aligned} c_0^{\uplambda }{:}{=}x_0, \ c_{n+1}^{\uplambda }{:}{=}c_{n}^{\uplambda }+\uplambda F(c_{n}^{\uplambda }), \ n=0,1,\dots . \end{aligned}$$
(15)

Then, for any \(\uplambda \in (0,1]\) define a continuous trajectory on \([t_0,T]\) as follows

$$\begin{aligned} \begin{aligned} c_{\uplambda }(t)=c_{n}^{\uplambda }+(t-t_{0}-n\uplambda )F(c_{n}^{\uplambda }),\quad&t\in [t_{0}+n{\uplambda },t_{0}+(n+1)\uplambda ], \\&n=0,1,\dots . \end{aligned} \end{aligned}$$
(16)
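For intuition, the construction (15)–(16) can be carried out numerically. A sketch for the toy field \(F(x)=-x\) on \(\mathcal {X}={\mathbb {R}}\) (our own choice, not the field of the paper), whose Euler polygon approaches the exact solution \(x(t)=x_{0}e^{-(t-t_{0})}\) as \(\uplambda \rightarrow 0\):

```python
import math

def euler_polygon(F, x0, t0, lam, n_steps):
    """Nodes c_n^lam of (15): c_0 = x0, c_{n+1} = c_n + lam * F(c_n)."""
    c = [x0]
    for _ in range(n_steps):
        c.append(c[-1] + lam * F(c[-1]))
    return c

def c_lam(c, t0, lam, t):
    """Piecewise affine interpolation (16) of the Euler nodes at time t."""
    n = min(int((t - t0) / lam), len(c) - 2)
    return c[n] + (t - t0 - n * lam) * (c[n + 1] - c[n]) / lam

F = lambda x: -x
lam, t0, x0 = 0.01, 0.0, 1.0
nodes = euler_polygon(F, x0, t0, lam, 100)     # covers [0, 1]
approx = c_lam(nodes, t0, lam, 1.0)
print(approx, math.exp(-1.0))                  # the two values are close
```

Between nodes the trajectory moves along the frozen direction \(F(c_{n}^{\uplambda })\), which is exactly what the error term \(\varDelta _\uplambda \) below measures.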

Proposition 4

Let \(t_0\ge 0\) and let \(x_0\in {\hat{{\mathcal {D}}}}\setminus \{ {\bar{z}} \}\). Assume that (B) holds.

  1. If \(\mathcal {X}\) is finite-dimensional, then for all \(T>t_0\) there exists a solution x(t) of (DS-0) on \([t_0,T]\) in the class \({\mathbb {B}}_{t_0,T}\);

  2. If \(\mathcal {X}\) is infinite-dimensional, then there exist \(R>0\) and \(T>t_0\) such that there exists a solution x(t) of (DS-0) on \([t_0,T]\) in the class \({\mathbb {B}}_{t_0,x_0,T}^R\).

Proof

Let us start with the initial settings.

  1. In the case where \(\mathcal {X}\) is finite-dimensional, we take any \(T>t_0\). Let us note that in this case \({\hat{{\mathcal {D}}}}\) is closed and bounded, hence compact. Since F is continuous on \({\hat{{\mathcal {D}}}}\), it is uniformly continuous, i.e.

    $$\begin{aligned} \forall \varepsilon>0\ \exists \delta >0\ \forall x_1,x_2\in {\hat{{\mathcal {D}}}} \quad \Vert x_{1}-x_{2}\Vert< \delta \implies \Vert F(x_{1})-F(x_{2})\Vert < \varepsilon . \end{aligned}$$
  2. In the case where \(\mathcal {X}\) is infinite-dimensional, let \(T=\frac{R}{M}+t_0\), where R is such that \(F(\cdot )\) is Lipschitz on \(B(x_0,R)\). Let \(m_\uplambda {:}{=}\lceil (T-t_0)\uplambda ^{-1}\rceil \). Let us note that, by the fact that \(x_0\in {\hat{{\mathcal {D}}}}\setminus \{{\bar{z}}\}\) and by assumption (B), for any \(\uplambda \in (0,1]\) and all \(t\in [t_0,T]\) we have \(c_\uplambda (t)\in {\hat{{\mathcal {D}}}}\). For any \(\uplambda \in (0,1]\) the function \(c_\uplambda (\cdot )\) given by (16) is differentiable on \([t_0,T]\setminus \{t_{0},t_{0}+\uplambda ,\dots , t_0+m_{\uplambda } \uplambda \}\) as a piecewise affine function. For all \(\uplambda \in (0,1]\) and any \(t\in [t_0,T]\) (\(t=t_0+a\uplambda +{\tilde{t}}\), \(a\in {\mathbb {N}}\), \(0\le {\tilde{t}}<\uplambda \)) we have

    $$\begin{aligned}&\Vert c_\uplambda (t)-x_0\Vert = \Vert x_0 + \uplambda \sum _{n=0}^{a-1} F(c_{\uplambda }(t_0+n\uplambda ))+{\tilde{t}}F(c_{\uplambda }(t_0+a\uplambda )) - x_0\Vert \\&\le \uplambda \sum _{i=0}^{a-1} M + {\tilde{t}} M= M(a\uplambda +{\tilde{t}})\le M (T-t_0) =\frac{R}{M} M = R. \end{aligned}$$

    Let us note that in this case F is uniformly continuous on \(B(x_0,R)\cap {\hat{{\mathcal {D}}}}\), i.e.

    $$\begin{aligned} \forall \varepsilon>0\ \exists \delta >0\ \forall x_1,x_2\in B(x_0,R)\cap {\hat{{\mathcal {D}}}} \quad \Vert x_1-x_2\Vert< \delta \implies \Vert F(x_1)-F(x_2)\Vert < \varepsilon . \end{aligned}$$

Now let us continue the proof in both cases 1. and 2. together. For any \(\uplambda \in (0,1]\) define

$$\begin{aligned} \varDelta _\uplambda (t){:}{=}\left\{ \begin{array}{ll} {\dot{c}}_\uplambda (t) - F(c_\uplambda (t)),\quad &{} t_0+n\uplambda<t<\min \{t_0+(n+1)\uplambda ,T\},\\ &{} n=0,1\dots ,m_\uplambda , \\ 0 &{} t= t_0,t_0+ \uplambda ,\dots ,t_0+m_\uplambda \uplambda . \end{array} \right. \end{aligned}$$
(17)

Note that for all \(t\in [t_0,T]\),

$$\begin{aligned} c_\uplambda (t)=x_0+\int _{t_0}^{t} {\dot{c}}_\uplambda (s)\, ds = x_0+\int _{t_0}^{t} F(c_\uplambda (s)) + \varDelta _\uplambda (s) \, ds. \end{aligned}$$

We have

$$\begin{aligned} \Vert \varDelta _\uplambda (t)\Vert =\Vert F(c_n^\uplambda )-F(c_\uplambda (t))\Vert ,\quad&t\in [t_0+n\uplambda ,\min \{t_0+(n+1)\uplambda ,T\}],\\&n=0,1,\dots ,m_\uplambda . \end{aligned}$$

Let us note that \(c_\uplambda (\cdot )\) is Lipschitz continuous on \([t_0,T]\) because it is differentiable almost everywhere and the norm of its derivative is bounded by M. Therefore

$$\begin{aligned}&\forall n=0,1\dots ,m_\uplambda \\&\sup _{t\in [t_0+n\uplambda ,\min \{ t_0+(n+1)\uplambda ,T\}]} \Vert c_\uplambda (t_0+ n\uplambda )-c_\uplambda (t)\Vert \le M |t-(t_0+n\uplambda )|\le M\uplambda . \end{aligned}$$

Fix any \(\varepsilon >0\), let \(\delta >0\) be as in the uniform continuity of F, and take \(\uplambda \in (0,1]\) such that \( M \uplambda <\delta \). Then for all \(n=0,\dots ,m_\uplambda \) we have

$$\begin{aligned}&\sup _{t\in [t_0+n\uplambda ,\min \{ t_0+(n+1)\uplambda ,T\}]} \Vert c_\uplambda (t_0+n\uplambda )-c_\uplambda (t)\Vert \\&=\sup _{t\in [t_0+n\uplambda ,\min \{ t_0+(n+1)\uplambda ,T\}]} \Vert (t-(t_0+n\uplambda ))F(c_n^\uplambda ) \Vert \le M\uplambda < \delta , \end{aligned}$$

and consequently

$$\begin{aligned}&\forall n=0,\dots ,m_\uplambda \\&\sup _{t\in [t_0+n\uplambda ,\min \{ t_0+(n+1)\uplambda ,T\}]} \Vert F(c_n^\uplambda )-F(c_\uplambda (t))\Vert <\varepsilon . \end{aligned}$$

Hence, for all \(\uplambda < \frac{\delta }{M}\), we have

$$\begin{aligned} \forall n=0,\dots ,m_\uplambda \ \forall t\in [t_0+n\uplambda ,\min \{ t_0+(n+1)\uplambda ,T\}]\quad \Vert \varDelta _\uplambda (t)\Vert <\varepsilon . \end{aligned}$$

Thus,

$$\begin{aligned} \Vert \varDelta _\uplambda (\cdot )\Vert _{\infty }\rightarrow 0 \text { as } \uplambda \rightarrow 0 \text { on } [t_0,T]. \end{aligned}$$

Let \(\{\uplambda _k\}_{k\in {\mathbb {N}}}\) be a sequence in (0, 1] such that \(\uplambda _{k}\rightarrow 0\) as \(k\rightarrow +\infty \). By the Arzelà–Ascoli theorem, there exists a uniformly convergent subsequence of \(\{ c_{\uplambda _k}(t) \}_{k\in {\mathbb {N}}}\), namely \(\{ c_{\uplambda _{k_i}}(t) \}_{i\in {\mathbb {N}}}\), which converges to \(x(t)=\lim _{i\rightarrow +\infty } c_{\uplambda _{k_i}}(t)\) for \(t\in [t_0,T]\), i.e.

$$\begin{aligned} \begin{aligned}&\exists \{\uplambda _{k_i}\}_{i\in {\mathbb {N}}} \,\forall \varepsilon >0~ \exists \, i_0\in {\mathbb {N}}\ \forall i\ge i_{0} \,\forall t\in [t_0,T] \\ {}&\Vert x(t)-c_{\uplambda _{k_i}}(t)\Vert <\varepsilon . \end{aligned} \end{aligned}$$
(18)

Therefore, for all \(t\in [t_0,T]\),

$$\begin{aligned} c_{\uplambda _{k_i}}(t)=x_0+\int _{t_0}^{t} F(c_{\uplambda _{k_i}}(s)) + \varDelta _{\uplambda _{k_i}}(s) ds, ~~~x(t)=x_0+\int _{t_0}^{t} F(x(s)) ds. \end{aligned}$$

By Proposition 3, x(t) is a solution of (DS-0) on \([t_0,T]\). Since \(c_{\uplambda _{k_i}}(t)\in \hat{\mathcal {D}}\), \(i\in {\mathbb {N}}\), \(t\in [t_0,T]\), by the closedness of \({\hat{{\mathcal {D}}}}\) we obtain that \(x(t)\in \hat{\mathcal {D}}\) for \(t\in [t_0,T]\). \(\square \)

Corollary 2

Let \(t_0\ge 0\) and \(x_0\in {\hat{{\mathcal {D}}}}\setminus \{ {\bar{z}}\}\) be arbitrarily fixed. Assume that assumptions (A), (B), (C) are satisfied. Then there exist \(R>0\), \(T^{\prime }>0\) such that for all \(T\in (0,T^\prime )\) there exists a solution to (DS-0) on \([t_0,t_0+T]\) and it is unique in the class \({\mathbb {B}}_{t_0,x_0,T}^R\).

Proof

The proof follows from Propositions 2 and 4.\(\square \)

4.2 Contraction Mapping Principle for an Extended Vector Field F

We consider the following Cauchy problem

$$\begin{aligned}&{\dot{x}}(t)={\tilde{F}}(x(t)),\quad t\ge 0,\\&x(0)=x_{00}\in {\hat{{\mathcal {D}}}}\setminus \{ {\bar{z}} \}, \end{aligned}$$
(DS-1)

where \({\tilde{F}}: \mathcal {X}\rightarrow \mathcal {X}\) is such that \({\tilde{F}}(x)=F(x)\) for all \(x\in \hat{\mathcal {D}}\) and \({\tilde{F}}\) is continuous on \(\mathcal {X}\).

Lemma 1

([16, Lemma 1.2]) Let X, Y be Banach spaces, \(\Omega \subset X\) closed and \(f:\Omega \rightarrow Y\) continuous. Then there is a continuous extension \({\tilde{f}}:X\rightarrow Y\) of f such that \({\tilde{f}}(X)\subset \mathrm{conv}\,f(\Omega )\) (the convex hull of \(f(\Omega )\)).

Proposition 5

Let \(t_0\ge 0\) and \(x_0\in {\hat{{\mathcal {D}}}}\setminus \{ {\bar{z}} \}\). Then there exists \(T>0\) such that (DS-1) has a solution on the interval \([t_0,t_0+T]\).

Proof

For a given \(x\in C([t_0,t_0+T];\mathcal {X})\), define S[x] to be the function on \([t_0,t_0+T]\), given by

$$\begin{aligned} S[x](t){:}{=} x_0+\int _{t_0}^{t}{\tilde{F}}(x(\tau ))\,d\tau ,\quad t\in [t_0,t_0+T], \end{aligned}$$
(19)

where \({\tilde{F}}\) is an extension of F given by Lemma 1. In what follows we use the boundedness of F on \({\hat{{\mathcal {D}}}}\) and, via Lemma 1, the boundedness of \({\tilde{F}}\) on \(\mathcal {X}\).

Step 1. If \(x\in C([t_0,t_0+T];\hat{\mathcal {D}})\), then S[x] makes sense, since the right-hand side of (19) is well defined.

Step 2. Let us prove that \(S[x](\cdot )\in C([t_0,t_0+T];\mathcal {X})\) for any \(T>0\) and any \(x\in C([t_0,t_0+T];\mathcal {X})\). Assume \(t_1,t_2\in [t_0,t_0+T]\) with \(t_1<t_2\). It is evident that

    $$\begin{aligned} S[x](t_2)= S[x](t_1)+\int _{t_1}^{t_2}{\tilde{F}}(x(\tau ))\,d\tau . \end{aligned}$$
    (20)

    Then, as \(t_2\rightarrow t_1\),

    $$\begin{aligned} \Vert S[x](t_2)-S[x](t_1)\Vert _{\mathcal {X}}=\left\| \int _{t_1}^{t_2}{\tilde{F}}(x(\tau ))\,d\tau \right\| _{\mathcal {X}}\le \max _{\tau \in [t_{0},t_{0}+T]}\Vert {\tilde{F}}(x(\tau ))\Vert _{\mathcal {X}}\cdot |t_{2}-t_{1}|\rightarrow 0, \end{aligned}$$

    so S[x] is continuous on \([t_0,t_0+T]\).

    Thus \(S:C([t_0,t_0+T];\mathcal {X})\longrightarrow C([t_0,t_0+T];\mathcal {X})\).

  3. Step 3.

    Denote \(C_0{:}{=}C([t_0,t_0+T];\mathcal {X})\). Consider the following closed ball in \(C_0\), in which we look for a fixed point:

    $$\begin{aligned} C_{0D}{:}{=}\left\{ x\in C_0\mid |x-x_0|_{C_{0}}\equiv \max _{t\in [t_0,t_0+T]}\Vert x(t)-x_0 \Vert _\mathcal {X}\le 1/2 \right\} ,\quad x_0\in \hat{\mathcal {D}}. \end{aligned}$$

    Clearly, \(C_{0D}\subseteq C_0\) is a complete metric space with the metric induced by the norm of \(C_0\). Let us show that, for T chosen small enough, the operator S maps \(C_{0D}\) into itself and has a fixed point. By Step 2, \(S[x](\cdot )\in C_0\) whenever \(x(\cdot )\in C_{0D}\). We now show that \(S[x](\cdot )\in C_{0D}\). It follows from (19) that

    $$\begin{aligned} |S[x]-x_0|_{C_{0}}=\max _{t\in [t_0,t_0+T]}\Vert S[x](t)-x_0\Vert _{\mathcal {X}}=\max _{t\in [t_0,t_0+T]}\left\| \int _{t_0}^{t}{\tilde{F}}(x(\tau ))\,d\tau \right\| _\mathcal {X}\\ \le \max _{\tau \in [t_0,t_0+T]}\Vert {\tilde{F}}(x(\tau ))\Vert _{\mathcal {X}}\, T{=}{:} cT. \end{aligned}$$

    Therefore, for a choice of \(T\le 1/(2c)\),

    $$\begin{aligned} |S[x]-x_0|_{C_{0}}\le 1/2. \end{aligned}$$

    Hence \(S[x](\cdot )\in C_{0D}\), i.e., \(S:C_{0D}\rightarrow C_{0D}\) for every \(T\le 1/(2c)\).

  4. Step 4

    We now show that the sequence of iterates \(\{ x_n(\cdot )\}_{n\ge 1}\subseteq C_{0D}\) is a Cauchy sequence. Let the initial point \(x_0\in \hat{\mathcal {D}}\) be given and define \(x_0(\cdot ){:}{=}x_0\). Denote \(x_1(\cdot ){:}{=}S[x_0](\cdot )\) and \(x_{n+1}(\cdot ){:}{=}S[x_n](\cdot )\), \(n=1,2,\dots \). The following estimates hold successively.

    $$\begin{aligned}&|x_{n+1}-x_{n}|_{C_{0}}=|S[x_n]-S[x_{n-1}]|_{C_{0}}\le cT|x_n - x_{n-1}|_{C_{0}}\\&\le \cdots \le (cT)^{n}|x_1-x_0|_{C_{0}}=(cT)^{n}\max _{t\in [t_0,t_0+T]}\Vert x_1(t)-x_0\Vert _{\mathcal {X}}\le c^n T^{n+1}\Vert F(x_{0})\Vert _{\mathcal {X}}. \end{aligned}$$

    Let \(m, n\in {\mathbb {N}}\) with \(m > n\) and put \(\delta {:}{=}cT\in [0,1)\). Then

    $$\begin{aligned}&|x_m - x_n|_{C_{0}}\le |x_m - x_{m-1}|_{C_{0}}+|x_{m-1} - x_{m-2}|_{C_{0}}+\cdots +|x_{n+1}-x_n|_{C_{0}}\\&\le (\delta ^{m}+\delta ^{m-1}+\cdots +\delta ^{n+1})\frac{\Vert F(x_{0})\Vert _{\mathcal {X}}}{c} = \delta ^{n+1}\frac{\Vert F(x_{0})\Vert _{\mathcal {X}}}{c}\sum _{k=0}^{m-n-1}\delta ^k\\&\le \delta ^{n+1}\frac{\Vert F(x_{0})\Vert _{\mathcal {X}}}{c}\sum _{k=0}^{\infty }\delta ^k {\mathop {=}\limits ^{\delta <1}} \frac{\delta ^{n+1}\Vert F(x_{0})\Vert _{\mathcal {X}}}{c(1-\delta )}. \end{aligned}$$

    Let \(\varepsilon >0\). Since \(\delta \in [0, 1)\), we can find \(N \in {\mathbb {N}}\) such that

    $$\begin{aligned} \delta ^{N+1}< \varepsilon c(1-\delta )/\Vert F(x_{0})\Vert _{\mathcal {X}}. \end{aligned}$$

    Therefore, for all \(m,n>N\),

    $$\begin{aligned} |x_{m} - x_{n}|_{C_{0}}\le \varepsilon . \end{aligned}$$

    Hence the sequence \(\{x_n(\cdot )\}_{n\ge 1}\subseteq C_{0D}\) is Cauchy. Therefore, \(\{x_n(\cdot )\}_{n\ge 1}\) converges to some \({\bar{x}}(\cdot )\in C_{0D}\), where \({\bar{x}}(\cdot )\) satisfies

    $$\begin{aligned} {\bar{x}}(t)= x_{0}+\int _{t_0}^{t}{\tilde{F}}({\bar{x}}(\tau ))\,d\tau ,\quad \forall t\in [t_0,t_0+T]. \end{aligned}$$
    (21)

    By Proposition 3, \({\bar{x}}(\cdot )\) is a solution of (DS-1) for \(t\in [t_0,t_0+T]\).

\(\square \)
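The fixed-point construction of Steps 3-4 is the classical Picard iteration \(x_{n+1}=S[x_n]\), and its geometric contraction (rate \(\delta =cT\)) can be observed numerically. The sketch below is a toy instance with the scalar field \({\tilde{F}}(x)=-x\) (not the operator F of the paper); the integral in (19) is discretised by the trapezoidal rule.

```python
import numpy as np

def picard_step(F, x, t0, T, x0, m=2001):
    # S[x](t) = x0 + \int_{t0}^{t} F(x(tau)) dtau, on a uniform grid of m points
    ts = np.linspace(t0, t0 + T, m)
    integrand = F(x)
    # cumulative trapezoidal integral of F(x(.)) from t0 to each grid point
    increments = (integrand[1:] + integrand[:-1]) / 2 * np.diff(ts)
    return x0 + np.concatenate(([0.0], np.cumsum(increments)))

F = lambda x: -x              # toy vector field (Lipschitz with constant 1)
t0, T, x0 = 0.0, 0.4, 1.0     # cT = 0.4 <= 1/2, as required in Step 3
x = np.full(2001, x0)         # x_0(.) := x0, the constant initial guess

gaps = []                     # |x_{n+1} - x_n|_{C_0}
for _ in range(8):
    x_new = picard_step(F, x, t0, T, x0)
    gaps.append(np.max(np.abs(x_new - x)))
    x = x_new

ratios = [gaps[k + 1] / gaps[k] for k in range(len(gaps) - 1)]
ts = np.linspace(t0, t0 + T, 2001)
print(max(ratios))                                  # below the contraction factor cT
print(np.max(np.abs(x - x0 * np.exp(-(ts - t0)))))  # iterates approach exp(-t)
```

The successive gaps \(|x_{n+1}-x_n|_{C_0}\) decay at least geometrically, matching the estimate \((cT)^n|x_1-x_0|_{C_0}\) of Step 4.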

Remark 4

The proof of the above proposition does not work if \({\tilde{F}}\) is defined only on the set \({\hat{{\mathcal {D}}}}\). This comes from the fact that the operator

$$\begin{aligned} S: C([t_{0},t_{0}+T];{\hat{{\mathcal {D}}}}) \rightarrow C([t_{0},t_{0}+T];\mathcal {X}) \end{aligned}$$

may map a function \(x(\cdot )\) to a function with values outside of \({\hat{{\mathcal {D}}}}\), in which case Step 4 of the proof cannot be applied. However, in the case when \(x_0\in \text {int}\, {\hat{{\mathcal {D}}}}\), the following corollary holds.

Corollary 3

We have the following relationships between (DS) and (DS-1):

  1. 1.

    if \(x_0\in \text {int}\, {\hat{{\mathcal {D}}}}\), then there exists a function \(x(\cdot )\in C^1([t_0,t_0+T];{\hat{{\mathcal {D}}}})\), which is the unique solution of (DS) and (DS-1) on \([t_0,t_0+T]\) for some \(T>0\);

  2. 2.

    if \(x_0\in \partial {\hat{{\mathcal {D}}}}\) and assumption (B) holds, then the solution of (DS) is unique on \([t_{0},t_{0}+T_{1}]\) for some \(T_{1}>0\) and the solution of (DS-1) exists on \([t_{0},t_{0}+T_{2}]\) for some \(T_{2}>0\).

Proof

The proof follows the lines of the proof of Proposition 5 up to Step 3, replacing \({\tilde{F}}\) with F; then we proceed as follows.

We consider the following two cases.

  1. Case 1.

    Suppose \(x_0\in \hat{\mathcal {D}}\) is such that \(\rho {:}{=}\inf \limits _{y\in \partial \hat{\mathcal {D}}}\Vert x_{0}-y\Vert _{\mathcal {X}}{=}{:}\mathrm{dist}(x_{0},\partial \hat{\mathcal {D}})>0\).

  2. Case 2.

    Suppose \(x_0\in \hat{\mathcal {D}}\) such that \(\rho =0\). Then one can follow the proof of Proposition 5.

We look for a solution to (DS) for Case 1. Let us consider

$$\begin{aligned} C_{0D}{:}{=}\left\{ x\in C_0\mid |x-x_0|_{C_{0}}\equiv \max _{t\in [t_0,t_0+T]}\Vert x(t)-x_0 \Vert _\mathcal {X}\le \rho /2 \right\} ,\quad x_0\in \hat{\mathcal {D}}. \end{aligned}$$

Since \(x_0\in \hat{\mathcal {D}}{:}{=}\{ x\in {\mathcal {D}} \mid \Vert x-{\bar{w}}\Vert ^2\ge r >0 \}\), we have \(\rho {:}{=}\Vert x_0 -{\bar{w}}\Vert >0\). Let us consider the following two possible cases for fixed \(r>0\):

  1. (i)

    if \(\rho >2r\), then consider the ball \(B_{\frac{\rho }{2}}(x_0)\subset \hat{\mathcal {D}}\);

  2. (ii)

    if \(\rho <2r\), then consider the ball \(B_{r-\frac{\rho }{2}}(x_0)\subset \hat{\mathcal {D}}\).

Thereafter, as in Step 4 of Proposition 5, we show the existence of a Cauchy sequence in \(C_{0D}\).

  1. Step 5.

    Moreover, \(C_{0D}\) is a closed subset of \(C_0\). Indeed, this follows from the continuity of S and the implication

    $$\begin{aligned} x_n\in {\mathcal {D}}\Rightarrow \lim \limits _{n\rightarrow \infty }x_n{=}{:}\hat{x}\in {\mathcal {D}},\quad \text {since } {\mathcal {D}}\ \text {is closed in}\ \mathcal {X}. \end{aligned}$$
  2. Step 6.

    Finally, \(\hat{x}\) must be a fixed point of \(S:C_{0D}\rightarrow C_{0D}\). Indeed,

    $$\begin{aligned} \hat{x}=\lim \limits _{n\rightarrow \infty }x_n=\lim \limits _{n\rightarrow \infty }S[x_{n-1}]{\mathop {=}\limits ^{\text {continuity of} ~S}}S\left[ \lim \limits _{n\rightarrow \infty }x_{n-1}\right] =S[{\hat{x}}]. \end{aligned}$$

Hence we arrive at a solution to (DS).\(\square \)

In the following example we show that the existence of solutions of (DS) is not guaranteed without assumption (B); however, solutions of (DS-1) still exist due to Proposition 5.

Example 1

Let \(\mathcal {X}={\mathbb {R}}^2\), \({\bar{w}}=(-1,0)\), \({\bar{z}}=(1,0)\), \({\hat{{\mathcal {D}}}}={\bar{B}}((0,0),1)\setminus B((-1,0),1)\) and let \(F: {\hat{{\mathcal {D}}}}\rightarrow \mathcal {X}\) be defined as

$$\begin{aligned} F((x_{1},x_{2}))=(1-x_{1},0),\quad (x_{1},x_{2})\in {\hat{{\mathcal {D}}}}. \end{aligned}$$

Then assumptions (A) and (C) are satisfied. Consider \(x_0=x(0)=(0,-1)\). Then there is no solution of (DS). By extending F in a continuous way:

$$\begin{aligned} {\tilde{F}}((x_{1},x_{2}))=(1-x_{1},0),\quad (x_{1},x_{2})\in \mathcal {X}, \end{aligned}$$

we obtain that one solution of (DS-1) is \(x(t)=(1-e^{-t},-1)\).
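The claimed trajectory can be checked directly: along \(x(t)=(1-e^{-t},-1)\) one has \({\dot{x}}(t)=(e^{-t},0)={\tilde{F}}(x(t))\) and \(x(0)=(0,-1)\). A minimal numerical sketch of this verification (finite differences against the extended field):

```python
import numpy as np

# continuous extension of F to all of R^2, as in Example 1
F_tilde = lambda x1, x2: (1.0 - x1, np.zeros_like(x1))

t = np.linspace(0.0, 2.0, 401)
x1, x2 = 1.0 - np.exp(-t), -np.ones_like(t)   # candidate solution of (DS-1)

# compare finite differences of x(t) with F_tilde(x(t))
dx1, dx2 = np.gradient(x1, t), np.gradient(x2, t)
f1, f2 = F_tilde(x1, x2)
print(np.max(np.abs(dx1 - f1)), np.max(np.abs(dx2 - f2)))  # both near zero
print(x1[0], x2[0])  # the initial condition (0, -1)
```

For \(t>0\) the trajectory leaves \({\hat{{\mathcal {D}}}}\) immediately (its norm exceeds 1), consistent with the nonexistence of a solution of (DS).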

The following example shows that by considering (DS-1) under assumption (B) we may lose the uniqueness of solutions in the sense of Definition 1.

Example 2

Let \(\mathcal {X}={\mathbb {R}}^2\), \({\bar{w}}=(0,-1)\), \({\bar{z}}=(1,0)\), \({\hat{{\mathcal {D}}}}=[0,1]\times [-1,0]\setminus B((0,-1),1)\) and let \(F: {\hat{{\mathcal {D}}}}\rightarrow \mathcal {X}\) be defined as

$$\begin{aligned} F((x_{1},x_{2}))=(1-x_{1},0-x_{2}),\quad (x_{1},x_{2})\in {\hat{{\mathcal {D}}}}. \end{aligned}$$

Then assumptions (A), (B) and (C) are satisfied. Consider \(x_0=x(0)=(0,0)\). By extending F(x) in the continuous way:

$$\begin{aligned} {\tilde{F}}((x_{1},x_{2}))=\left\{ \begin{array}{ll} (1-x_{1},0-x_{2}), &{} (x_{1},x_{2})\in {\hat{{\mathcal {D}}}} , \\ (1-x_{1},x_{1}), &{} (x_{1},x_{2})\in \varGamma {:}{=}\{ (1-e^{-s},e^{-s}+s-1), s\in (0,1]\},\\ \text {continuous}, &{} \text {otherwise on } \mathcal {X}. \end{array}\right. \end{aligned}$$

We obtain that the system (DS-1) has more than one solution. For example:

$$\begin{aligned}&(x_{1}(t),x_{2}(t))=(1-e^{-t},e^{-t}+t-1),\quad t\in [0,1],\\&(x_{1}(t),x_{2}(t))=(1-e^{-t},0),\quad t\in [0,1]. \end{aligned}$$
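Both candidate trajectories can be verified against \({\tilde{F}}\): the first runs along \(\varGamma \), where \({\tilde{F}}=(1-x_{1},x_{1})\), and the second stays in \({\hat{{\mathcal {D}}}}\), where \({\tilde{F}}=(1-x_{1},-x_{2})\). A short numerical check (finite differences):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 201)

# first solution runs along Gamma, where F~((x1, x2)) = (1 - x1, x1)
x1a, x2a = 1.0 - np.exp(-t), np.exp(-t) + t - 1.0
da1, da2 = np.gradient(x1a, t), np.gradient(x2a, t)
print(np.max(np.abs(da1 - (1.0 - x1a))), np.max(np.abs(da2 - x1a)))

# second solution stays in the set D^, where F~((x1, x2)) = (1 - x1, -x2)
x1b, x2b = 1.0 - np.exp(-t), np.zeros_like(t)
db1, db2 = np.gradient(x1b, t), np.gradient(x2b, t)
print(np.max(np.abs(db1 - (1.0 - x1b))), np.max(np.abs(db2 + x2b)))

# both trajectories start from the same initial point x0 = (0, 0)
print((x1a[0], x2a[0]), (x1b[0], x2b[0]))
```

All four residuals vanish up to discretisation error, so both curves solve (DS-1) with the same initial condition, illustrating the loss of uniqueness.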

5 Extendability of Solutions to (DS-0)

In this section we prove Theorem 1.

The proof is based on two lemmas, Lemmas 2 and 5 (see “Appendix”). The proposed approach follows the lines of Lecture 3 of the lecture notes [3]. The crucial assumptions are (A), (B) and (C) (see Lemma 2 below). For more general results and examples on the extendability of solutions, see e.g., [21] and the references therein.

Let

$$\begin{aligned} {{\mathcal {T}}}=[t_{0},T), \ t_{0}<T\le +\infty \quad \text {or }\quad {{\mathcal {T}}}=[t_{0},T],\ t_{0}<T<+\infty . \end{aligned}$$

As a consequence of Proposition 4 and Corollary 2 we have the following 'non-branching' result.

Lemma 2

Suppose that assumptions (A), (B) and (C) are satisfied. Let \(x_1(t)\), \(x_2(t)\) be solutions to problem (DS) in the sense of Definition 1 on \({{\mathcal {T}}}_1\), \({{\mathcal {T}}}_2\), respectively. Then one of these solutions is a prolongation of the other (in particular, they coincide if \({\mathcal {T}}_1={\mathcal {T}}_2)\).

Proof

On the contrary, suppose that

$$\begin{aligned} x_{1}(t) \not \equiv x_{2}(t) \quad \text {on} \ {\mathcal {T}}_1\cap {\mathcal {T}}_{2}. \end{aligned}$$

Consider the set

$$\begin{aligned} {\mathcal {T}}^{\ne } {:}{=} \{ t\in {\mathcal {T}}_1\cap {\mathcal {T}}_2 | x_1(t)\ne x_2(t) \}. \end{aligned}$$

Let us note that \(t_0\notin {\mathcal {T}}^{\ne }\) (by the initial condition of (DS)). Furthermore, the set \({\mathcal {T}}^{\ne }\) is open in the set \({\mathcal {T}}_1\cap {\mathcal {T}}_2\), because it is the inverse image of \((0,+\infty )\) under the continuous mapping \(t\mapsto \Vert x_1(t)-x_2(t)\Vert \) defined on \({\mathcal {T}}_1\cap {\mathcal {T}}_2\).

Put

$$\begin{aligned} T^{*}=\inf {\mathcal {T}}^{\ne }. \end{aligned}$$

Let us note that \(T^{*}\notin {\mathcal {T}}^{\ne }\) (hence \(x_1(T^{*})=x_2(T^{*})\)). Indeed, if \(T^{*}=t_0\), then \(T^{*}\notin {\mathcal {T}}^{\ne }\) because \(x_1(t_0)=x_2(t_0)\).

If \(T^{*}>t_0\), then \(T^{*}\) is a boundary point of \({\mathcal {T}}^{\ne }\), so \(T^{*}\notin {\mathcal {T}}^{\ne }\) since \({\mathcal {T}}^{\ne }\) is open in \({\mathcal {T}}_1\cap {\mathcal {T}}_2\). This means that any right-hand side half-neighbourhood of the point \(T^{*}\) contains a point \(t_1>T^{*}\) with \(t_1\in {\mathcal {T}}^{\ne }\subsetneq {\mathcal {T}}_1\cap {\mathcal {T}}_2\); in particular, the intersection of any such half-neighbourhood with \({\mathcal {T}}^{\ne }\) is nonempty.

Take any \(\alpha >T^{*}\) and \(t_1\in {\mathcal {T}}^{\ne }\cap (T^{*},\alpha )\). By Remark 2, the functions \(x_1(t)\), \(x_2(t)\) are solutions to the Cauchy problem

$$\begin{aligned} \left\{ \begin{array}{ll} {\dot{x}}(t)=F(x(t)), &{} t>T^{*}\\ x(T^{*})=x_1(T^{*}) \end{array}\right. \end{aligned}$$
(22)

on the interval \([T^{*},t_1]\). Since \(x_1(t), x_2(t)\in {\hat{{\mathcal {D}}}}\) for all \(t\in [T^{*},t_1]\) and the set \({{\hat{{\mathcal {D}}}}}\) is bounded, we have

$$\begin{aligned} R_{12}=\max _{i=1,2} \sup _{t\in [T^{*},t_{1}]} \Vert x_i(t)-x_1(T^{*})\Vert < +\infty . \end{aligned}$$
(23)

By Corollary 2, there exists \(T^\prime >t_0\) such that for any \(T\in (t_{0},T^\prime ]\), the solution of the Cauchy problem (22) on the interval \([T^{*},T^{*}+T]\) satisfying

$$\begin{aligned} \Vert x(t)-x_{1}(T^{*})\Vert \le R_{12} \end{aligned}$$
(24)

is unique. Taking \(T=\min \{ T^\prime , t_1-T^{*} \}\) we arrive at a contradiction with Corollary 2, because, by (23), the condition (24) holds both for \(x_1(t)\) and \(x_2(t)\), but the functions \(x_1(t)\) and \(x_2(t)\) differ in any right-hand side half-neighbourhood of \(T^{*}\). \(\square \)

Now we are ready to prove Theorem 1.

Proof of Theorem 1

By Corollary 2, there exists a solution of problem (DS-0) on some interval \([t_0, T] \) (\(T>t_0\)) in the class \({\mathbb {B}}_{t_0,x_{0},T}^R\) for some \(R>0\). By Lemma 2, for any two solutions of problem (DS-0) on different intervals, one is the prolongation of the other.

Consider now, for any \(T>t_0\), all functions from \(C^1([t_0,T];{\hat{{\mathcal {D}}}})\). Among these functions solutions of problem (DS-0) may or may not exist. Put

$$\begin{aligned} \begin{aligned}&{\mathbb {T}}=\{ T>t_{0} \mid \exists \text { a solution to (DS-0) in } C^1([t_0,T];{\hat{{\mathcal {D}}}}) \},\\&T_0=\sup {\mathbb {T}}. \end{aligned} \end{aligned}$$
(25)

If \(T_0=+\infty \), there exists a solution \({\tilde{x}}(t)\in C^1([t_0,+\infty );{\hat{{\mathcal {D}}}})\) to problem (DS-0). Indeed, take a monotonically increasing sequence \(T_n\rightarrow +\infty \) and the corresponding sequence of solutions \(\{x_n(t)\}\); by Lemma 2, for every \(n\in {\mathbb {N}}\), the solution \(x_{n+1}\) is a prolongation of \(x_n\). Hence, the function

$$\begin{aligned} {\tilde{x}}(t)=\left\{ \begin{array}{lll} x_n(t), &{} t\in [T_{n-1},T_n), &{} n\ge 2\\ x_1(t), &{} t\in [t_0,T_1) \end{array}\right. \end{aligned}$$

is a solution defined on \([t_0,+\infty )\). Other solutions (which do not coincide with the restrictions of \({\tilde{x}}(t)\) on smaller intervals) do not exist by Lemma 2. In the rest of the proof, we show that this is the only possible case.

Consider now \(T_0<+\infty \). Then two cases are possible:

  1. (a)

    \(T_0\in {\mathbb {T}}\),

  2. (b)

    \(T_0\notin {\mathbb {T}}\).

In case (a) there exists a solution \(x(\cdot )\in C^1([t_0,T_0];{\hat{{\mathcal {D}}}})\) to problem (DS-0). But then, by Corollary 2 applied to problem (DS-0) with \(t_0=T_0\), the solution can be extended beyond \(T_0\), and both one-sided derivatives \({\dot{x}}_{-} (T_0)\) and \({\dot{x}}_{+} (T_0)\) exist and equal \(F(x(T_0))\): the left one by the definition of the solution on \([t_0,T_0]\), the right one by the definition of the solution to the problem on the interval starting at \(T_0\). As a consequence, we get a solution on a larger interval and arrive at a contradiction with the definition of \(T_0\). This excludes case (a).

In case (b), by the arguments analogous to the case \(T_0=+\infty \), we get the existence and uniqueness of solutions x(t) of (DS-0) on the semi-interval \([t_0,T_0)\). Case (b) splits in two subcases:

  1. 1.

    \(\limsup _{t\rightarrow T_0^{-}} \Vert x(t)\Vert = + \infty \) (i.e., the solution is unbounded in any left-sided neighbourhood of \(T_0\)),

  2. 2.

    \(\limsup _{t\rightarrow T_0^{-}} \Vert x(t)\Vert < + \infty \).

Subcase 1 is impossible in view of the boundedness of the set \({{\hat{{\mathcal {D}}}}}\). Now we show that subcase 2 is also impossible.

Indeed, let the function x(t) be bounded on the whole half-interval \([t_0,T_0)\):

$$\begin{aligned} \exists\, C\ge 0\ \forall t\in [t_0, T_0): \,\, \Vert x(t)\Vert \le C. \end{aligned}$$

Since F is bounded on \({\hat{{\mathcal {D}}}}\), there exists \(M>0\) such that

$$\begin{aligned} \forall t\in [t_0, T_0), \,\, \Vert F(x(t))\Vert \le M. \end{aligned}$$

Hence, from the equation (DS-0) it follows that the function x(t) is Lipschitz continuous with constant M on \((t_0,T_0)\), since \(\Vert \dot{x}(t)\Vert \le M\) for all \(t\in (t_0,T_0)\).

Hence, by Lemma 5 (see “Appendix”), there exists the limit

$$\begin{aligned} Y_0=\lim _{t\rightarrow T_0^-} x(t). \end{aligned}$$

Let us define \(x(T_0){:}{=}Y_0\) and denote the obtained function by Y(t); it is continuous from the left at \(T_0\). Then, by Lemma 3 (see “Appendix”), the function \(F(Y(t))\) is also continuous from the left at \(T_0\) and hence

$$\begin{aligned} \lim _{t\rightarrow T_0^-} F(x(t))=\lim _{t\rightarrow T_0^-} F(Y(t))= F(Y_0). \end{aligned}$$

Since for \(t<T_0\) we have \({\dot{x}}(t)=F(x(t))\), from the last formula we get

$$\begin{aligned} \lim _{t\rightarrow T_0^-} {\dot{x}}(t)=F(Y_0). \end{aligned}$$

However, by the lemma on extendability at a point (Lemma 4, see “Appendix”), it follows that the function x(t) can be extended from \([t_0,T_0)\) onto \([t_0,T_0]\) with preservation of continuous differentiability (we again denote the obtained function by Y(t)), with \({\dot{Y}}(T_0)=F(Y_0)\), and Y(t) is a solution on \([t_0,T_0]\). We arrive at a contradiction in subcase 2 of case (b), since in case (b) solutions on \([t_0, T_0]\) do not exist. \(\square \)

6 Behaviour of Trajectories at \(+\infty \)

In this section we prove Theorem 2 and provide other results concerning the convergence of trajectories.

Proposition 6

Let x(t), \(t\in [t_0,+\infty )\) be a solution of (DS-0). Suppose that there exists an increasing sequence \(\{t_n\}_{n\in {\mathbb {N}}}\), \(t_n\rightarrow +\infty \), such that \(x(t_n)\rightarrow {\bar{z}}\). Then \(x(t)\rightarrow {\bar{z}}\) as \(t\rightarrow +\infty \).

Proof

Let \(\{t_n\}_{n\in {\mathbb {N}}}\), \(t_n\rightarrow +\infty \) be such that \(x(t_n)\rightarrow {\bar{z}}\). We will show that for all \(\varepsilon >0\), for every increasing sequence \(\{s_n\}_{n\in {\mathbb {N}}}\), \(s_n\rightarrow +\infty \) there exists \(n_0\in {\mathbb {N}}\) such that for all \(n\ge n_0\), \(\Vert x(s_n)-{\bar{z}}\Vert \le \varepsilon \). Take any \(\varepsilon >0\) and an increasing sequence \(\{s_n\}_{n\in {\mathbb {N}}}\), \(s_n\rightarrow +\infty \).

We have

$$\begin{aligned} \frac{d}{dt}\Vert x(t)-{\bar{w}}\Vert ^2= 2\langle F(x(t)) \mid x(t)-{\bar{w}}\rangle \ge 0, \end{aligned}$$

hence the function \(\Vert x(\cdot )-{\bar{w}}\Vert ^2\) is nondecreasing. Moreover, by (9) (see also Lemma 6) and convergence of \(x(t_n)\), for all \(\varepsilon ^\prime >0\) there exists \(n_0^\prime \in {\mathbb {N}}\) such that for all \(n>n_0^\prime \)

$$\begin{aligned} \Vert x(t_n)-{\bar{w}}\Vert ^2\ge \Vert {\bar{w}}-{\bar{z}}\Vert ^2 -\varepsilon ^\prime . \end{aligned}$$

Take \(\varepsilon ^\prime =\varepsilon \) and \(n_0\) such that \(s_{n_0}\ge t_{n_0^\prime }\).

Then, by (9) and the fact that \(\Vert x(\cdot )-{\bar{w}}\Vert ^2\) is nondecreasing we obtain: for all \(n\ge n_0\)

$$\begin{aligned} \Vert x(s_n)-{\bar{z}}\Vert ^2\le \Vert {\bar{w}}-{\bar{z}}\Vert ^2-\Vert x(s_n)-{\bar{w}}\Vert ^2\le \Vert {\bar{w}}-{\bar{z}}\Vert ^2-\Vert x(t_{n_0^\prime })-{\bar{w}}\Vert ^2\le \varepsilon . \end{aligned}$$

\(\square \)

We now give the proof of Theorem 2.

Proof of Theorem 2

By (12), we have \({{\tilde{x}}}={{\bar{z}}}\), i.e., \( x(t_{n_k})\) converges weakly to \({{\bar{z}}}\). By (9), the following inequality holds for this subsequence

$$\begin{aligned} \Vert x(t_{n_k})-{{\bar{w}}}\Vert ^2+ \Vert x(t_{n_k})-{{\bar{z}}}\Vert ^2\le \Vert {{\bar{w}}}-{{\bar{z}}}\Vert ^2,\quad k=1,2,\ldots \end{aligned}$$

and hence

$$\begin{aligned} \limsup _{k\rightarrow \infty }\left( \Vert x(t_{n_k})-{{\bar{w}}}\Vert ^2+ \Vert x(t_{n_k})-{{\bar{z}}}\Vert ^2\right) \le \Vert {{\bar{w}}}-{{\bar{z}}}\Vert ^2. \end{aligned}$$
(*)

Since the norm is weakly lower semicontinuous, we also have

$$\begin{aligned} \Vert {{\bar{z}}}-{{\bar{w}}}\Vert ^2\le \liminf _{k\rightarrow \infty } \Vert x(t_{n_k})-{{\bar{w}}}\Vert ^2. \end{aligned}$$

This and \((*)\) imply

$$\begin{aligned} \liminf _{k\rightarrow \infty }\Vert x(t_{n_k})-{{\bar{z}}}\Vert ^2= 0. \end{aligned}$$

Consequently, there is a subsequence \(t_{n_{k_m}}\) such that

$$\begin{aligned} \lim _{m\rightarrow \infty } \Vert x(t_{n_{k_m}})-{{\bar{z}}}\Vert =0. \end{aligned}$$

Thus we have shown that for any sequence \(\{t_n\}_{n\in {\mathbb {N}}}\), \(t_n\rightarrow \infty \), there exists a subsequence \(\{t_{n_{k_m}}\}_{m\in {\mathbb {N}}}\) such that the above condition holds.

This means that \(\Vert x(t)-{{\bar{z}}}\Vert \rightarrow 0\) as \(t\rightarrow +\infty \).\(\square \)

In the next two propositions we propose variants of Theorem 2 in which we replace assumption (12) by other assumptions.

In the finite-dimensional case, the assertion of Theorem 2 can be obtained without assuming (12). Instead, we need to assume a strengthened form (*) of assumption (C) on the vector field F.

Recall that the assumption (C) says that \(\langle F(x) | {\bar{w}} - x \rangle \le 0\) for all \(x\in {\hat{{\mathcal {D}}}}\).

Proposition 7

Let \(\mathcal {X}\) be a finite-dimensional space, let x(t), \(t\in [t_0,+\infty )\) be a solution of (DS-0) and assume that

$$\begin{aligned} \langle F(x) \mid {\bar{w}} - x \rangle < 0 \quad \text {for all } x\in {\hat{{\mathcal {D}}}}\setminus \{ {\bar{z}} \}. \end{aligned}$$
(*)

Then \(\lim \limits _{t\rightarrow +\infty } x(t) ={\bar{z}}\).

Proof

Let \(g(t){:}{=}\frac{d}{dt}\Vert x(t)-{\bar{w}}\Vert ^{2}\), \(t\ge t_{0}.\) We start by showing that there exists a sequence \(\{t_k\}\), \(t_k\rightarrow +\infty \) such that \(\lim \limits _{k\rightarrow +\infty } g(t_k)= 0\).

On the contrary, suppose that there exist \(\varepsilon >0\) and \(t^\prime \ge t_0\) such that \(g(t)>\varepsilon \) for all \(t>t^\prime \). Hence, for all \(t> t^{\prime }\)

$$\begin{aligned}&\Vert x(t)-{\bar{w}}\Vert ^2 - \Vert x(t_0)-{\bar{w}}\Vert ^2=\int _{t_0}^{t} g(s) ds \\&= \int _{t_0}^{t^\prime } g(s) \, ds + \int _{t^\prime }^{t} g(s) \, ds \ge \int _{t_0}^{t^\prime } g(s) \, ds + \int _{t^\prime }^{t} \varepsilon \, ds \\&=\int _{t_0}^{t^\prime } g(s) \, ds + (t-t^\prime ) \varepsilon . \end{aligned}$$

By taking

$$\begin{aligned} t>\frac{1}{\varepsilon }\left( \Vert {\bar{z}}-{\bar{w}}\Vert ^2-\Vert x(t_0)-{\bar{w}}\Vert ^2- \int _{t_0}^{t^\prime } g(s) \, ds\right) +t^\prime , \end{aligned}$$

we arrive at \(\Vert x(t)-{\bar{w}}\Vert ^2>\Vert {\bar{z}}-{\bar{w}}\Vert ^2\), i.e. \(x(t)\notin {\hat{{\mathcal {D}}}}\), a contradiction. In this way we have proved that there exists a sequence \(\{t_k\}_{k\in {\mathbb {N}}}\) such that \(t_k\rightarrow +\infty \) and \(\lim \limits _{k\rightarrow +\infty } g(t_k)= 0\).

Since \(\mathcal {X}\) is finite-dimensional and \({\hat{{\mathcal {D}}}}\) is closed and bounded, the set \({\hat{{\mathcal {D}}}}\) is compact. Hence there exists a subsequence \(\{ t_{k_n} \}_{n\in {\mathbb {N}}}\) of \(\{t_k\}_{k\in {\mathbb {N}}}\) such that \(x(t_{k_n})\) converges and \(\lim \limits _{n\rightarrow +\infty } x(t_{k_n})={\tilde{x}}\in {\hat{{\mathcal {D}}}}\). Without loss of generality, we may assume that the sequence \(\{t_{k_n} \}_{n\in {\mathbb {N}}}\) is increasing.

By (8),

$$\begin{aligned} \frac{d}{dt} \Vert x(t)-{\bar{w}}\Vert ^2 = 2\langle F(x(t)) | x(t) - {\bar{w}} \rangle \ge 0\quad \text {for all } t\ge t_0. \end{aligned}$$

We have

$$\begin{aligned} 0=\lim _{n\rightarrow +\infty } g(t_{k_n}) = \lim _{n\rightarrow +\infty } 2 \langle F(x(t_{k_n}))\mid x(t_{k_n})-{\bar{w}} \rangle = 2 \langle F({\tilde{x}}) \mid {\tilde{x}}-{\bar{w}} \rangle , \end{aligned}$$
(26)

hence, by assumption, \({\tilde{x}}={\bar{z}}\). Now the assertion follows from Proposition 6. \(\square \)

Remark 5

By examining the above proof, we see that the assertion of Proposition 7 remains true in an infinite-dimensional Hilbert space \(\mathcal {X}\) under the following additional assumption (besides (*)) on F:

$$\begin{aligned}&F \text { can be extended to } \text {conv}\, {\hat{{\mathcal {D}}}} \text { in such a way that }\\&F: \text {conv}\, {\hat{{\mathcal {D}}}}\rightarrow \mathcal {X}\text { is weak-to-strong continuous on } \text {conv}\, {\hat{{\mathcal {D}}}}, \end{aligned}$$
(W-S)

i.e., for any weakly convergent sequence \({\hat{{\mathcal {D}}}}\ni x_{n}\rightharpoonup {\bar{x}}\) we have \(\lim \limits _{n\rightarrow +\infty } F(x_n)=F({\bar{x}})\), where the limit is strong.

The need for this additional assumption comes from the following fact: if \(v_{n}\rightarrow v\) strongly and \(u_{n}\rightharpoonup u\) weakly, then \(\langle v_{n}\mid u_{n}\rangle \rightarrow \langle v\mid u\rangle \). Indeed,

$$\begin{aligned} |\langle v_{n}| u_{n}\rangle -\langle v| u\rangle | =|\langle v_{n}-v| u_{n}\rangle +\langle u_{n}| v\rangle -\langle v| u\rangle |\\ \le \Vert u_{n}\Vert \Vert v_{n}-v\Vert +|\langle u_{n}| v\rangle -\langle v| u\rangle |. \end{aligned}$$

This fact allows us to show (26).
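The displayed estimate holds for arbitrary vectors and can be checked numerically. The snippet below works in \({\mathbb {R}}^5\), where weak and strong convergence coincide, so the sequence \(u_n\) is only a finite-dimensional stand-in for a weakly convergent sequence; the point is the term-by-term bound itself.

```python
import numpy as np

rng = np.random.default_rng(0)
v, u = rng.normal(size=5), rng.normal(size=5)

for n in range(1, 6):
    vn = v + rng.normal(size=5) / n      # v_n -> v (strongly)
    un = u + rng.normal(size=5) / n      # stand-in for u_n converging weakly to u
    lhs = abs(vn @ un - v @ u)
    rhs = np.linalg.norm(un) * np.linalg.norm(vn - v) + abs(un @ v - v @ u)
    assert lhs <= rhs + 1e-12            # the displayed inequality, term by term
print("inequality verified")
```

Both right-hand side terms vanish in the limit (the first because weakly convergent sequences are norm-bounded, the second by the weak convergence of \(u_n\)), which is what yields (26).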

The following proposition is a variant of Proposition 7 valid in an infinite-dimensional Hilbert space under the more restrictive form (C**) below of condition (*).

Proposition 8

Let \(\mathcal {X}\) be an infinite-dimensional space and let x(t), \(t\in [t_0,+\infty )\) be a solution of (DS-0). Assume that for all \(t\in [t_0,+\infty )\) such that \(x(t)\ne {\bar{z}}\), we have

$$\begin{aligned} \langle F(x(t)) \mid {\bar{w}} - x(t) \rangle \le \alpha (t), \end{aligned}$$
(C**)

where \(\alpha :[t_{0},+\infty ) \rightarrow {\mathbb {R}}_{-}\) is a function integrable on every interval \([t_{0},T]\), \(T>t_0\), and there exist \(T'>t_0\) with \(\sqrt{T^\prime }-\frac{t_0}{\sqrt{T^\prime }}> \frac{1}{2}(\Vert {\bar{w}}-{\bar{z}}\Vert ^2-\Vert x(t_0)-{\bar{w}}\Vert ^2)\) and \(\varepsilon \le \frac{-1}{\sqrt{T^\prime }}\) such that \(\sup \limits _{[t_{0},T'] } \alpha (s)<\varepsilon \). Then \(\lim \limits _{t\rightarrow +\infty } x(t) ={\bar{z}}\).

Proof

Let us note that if there exists \(t^\prime \in [t_0,+\infty )\) such that \(x(t^\prime )={\bar{z}}\), then \(x(t)={\bar{z}}\) for all \(t>t^\prime \), since \(F(x(t^\prime ))=F({\bar{z}})=0\).

Consider now the case where \(x(t)\ne {\bar{z}}\) for all \(t\in [t_0,+\infty )\). By contradiction, suppose that \(x(t)\not \rightarrow {\bar{z}}\). Then, in view of Proposition 6, there exists \(\varepsilon >0\) such that \(x(t)\notin B({\bar{z}},\varepsilon )\) for all \(t\in [t_0,+\infty )\).

We have that for all \(t>T'\)

$$\begin{aligned}&\Vert x(t)-{\bar{w}}\Vert ^2-\Vert x(t_0)-{\bar{w}}\Vert ^2 = \int _{t_0}^{t} \frac{d}{ds} \Vert x(s)-{\bar{w}}\Vert ^2\, ds \\&= 2\int _{t_0}^{t} \langle F(x(s)) | x(s)-{\bar{w}}\rangle \, ds \ge -2\int _{t_0}^{t} \alpha (s)\, ds \\&\ge -2(t-t_0)\cdot \sup _{s\in [t_0,t]} \alpha (s)\\&\ge -2(t-t_0)\cdot \varepsilon . \end{aligned}$$

Thereby, for \(t> \frac{\Vert {\bar{w}}-{\bar{z}}\Vert ^2}{2|\varepsilon |}+t_0 \), we arrive at a contradiction with \(x(t)\in {\hat{{\mathcal {D}}}}\subset {\mathcal {D}}\).\(\square \)

Proposition 9

Let \(\mathcal {X}\) be an infinite-dimensional space and let x(t), \(t\in [t_0,+\infty )\) be a solution of (DS-0). Assume that for all \(\varepsilon \) such that \(0<\varepsilon <\Vert x_0-{\bar{z}}\Vert \) we have \(\sup \limits _{x \in {\hat{{\mathcal {D}}}} \setminus B({\bar{z}},\varepsilon )}\langle F(x) \mid {\bar{w}}-x \rangle <0\). Then \(\lim \limits _{t\rightarrow +\infty } x(t) ={\bar{z}}\).

Proof

If there exists \(t'\in [t_0,+\infty )\) such that \(\langle F(x(t')) \mid {\bar{w}}-x(t')\rangle =0\), then we are done: in view of the assumptions of the proposition, \(x(t^\prime )={\bar{z}}\), and, by (9) and Proposition 1, \(x(t)={\bar{z}}\) for all \(t\ge t'\).

Suppose now that \(\langle F(x(t)) \mid {\bar{w}}-x(t)\rangle <0\) for all \(t\in [t_0,+\infty )\). For any \(t>t_0\),

$$\begin{aligned}&\Vert {\bar{z}}-{\bar{w}}\Vert ^2-\Vert x(t_0)-{\bar{w}}\Vert ^2\ge \Vert x(t)-{\bar{w}}\Vert ^2-\Vert x(t_0)-{\bar{w}}\Vert ^2 = \int _{t_0}^t \frac{d}{ds} \Vert x(s)-{\bar{w}}\Vert ^2\, ds \\&= 2\int _{t_0}^t \langle F(x(s)) \mid x(s)-{\bar{w}}\rangle \, ds \ge 2(t-t_0)\cdot \inf _{s\in [t_0,t]} \langle F(x(s)) \mid x(s)-{\bar{w}}\rangle \\&=-2(t-t_0)\cdot \sup _{s\in [t_0,t]} \langle F(x(s)) \mid {\bar{w}}-x(s)\rangle . \end{aligned}$$

Therefore \(\sup \limits _{s\in [t_0,t]} \langle F(x(s)) \mid {\bar{w}}-x(s)\rangle \rightarrow 0\) as \(t\rightarrow +\infty \). Note that \(\alpha (s){:}{=}\langle F(x(s)) \mid {\bar{w}}-x(s)\rangle \) is a continuous function on every interval \([t_0,t]\), \(t>t_0\). Hence, there exists an increasing sequence \(\{t_n\}_{n\in {\mathbb {N}}}\), \(t_n\rightarrow +\infty \), such that \( \langle F(x(t_n)) \mid x(t_n)-{\bar{w}}\rangle \rightarrow 0\). We claim that \(x(t_n)\rightarrow {\bar{z}}\).

Suppose, on the contrary, that \(x(t_n)\not \rightarrow {\bar{z}}\) as \(n\rightarrow +\infty \). Then there exist \(\varepsilon >0\) and a subsequence \(\{t_{n_k}\}_{k\in {\mathbb {N}}}\) such that \(x(t_{n_k})\in {\hat{{\mathcal {D}}}} \setminus B({\bar{z}},\varepsilon )\). Since \(\sup \limits _{x \in {\hat{{\mathcal {D}}}} \setminus B({\bar{z}},\varepsilon )}\langle F(x) \mid {\bar{w}}-x \rangle <0\), there exists \(c<0\) such that \(\langle F(x(t_{n_k})) \mid {\bar{w}}-x(t_{n_k}) \rangle <c\), a contradiction to \(\lim \limits _{k\rightarrow +\infty }\langle F(x(t_{n_k})) \mid {\bar{w}}-x(t_{n_k}) \rangle =0\).

Hence \(x(t_n)\rightarrow {\bar{z}}\). Now the assertion follows from Proposition 6. \(\square \)

7 Projective Dynamical System

In this section, we give an example of the system (DS-0). Let \({\bar{w}}, {\bar{z}} \in \mathcal {X}\). We consider the projective dynamical system

$$\begin{aligned}&{\dot{x}}(t)=P_{{\mathbb {C}}(x(t))}({\bar{w}})-x(t),\\&x(t_0)=x_0\in {\hat{{\mathcal {D}}}},\ t_0\ge 0, \end{aligned}$$
(PDS)

where \({\mathbb {C}}: {\hat{\mathcal {D}}} \rightrightarrows \mathcal {X}\) is a multifunction such that:

\((A^{\prime })\):

for all \(x\in {\hat{\mathcal {D}}}\), \({\bar{z}}\in {\mathbb {C}}(x)\) and \(P_{{\mathbb {C}}(x)}({\bar{w}})=x\) iff \(x={\bar{z}}\),

\((B^{\prime })\):

for all \(x\in {\hat{{\mathcal {D}}}}\) we have \(P_{{\mathbb {C}}(x)}({\bar{w}})\in {\mathcal {D}}\),

\((C^{\prime })\):

\(\langle P_{{\mathbb {C}}(x)}({\bar{w}})-x \mid {\bar{w}} - x \rangle \le 0\) for all \(x\in {\hat{\mathcal {D}}}\),

\((D^{\prime })\):

for all \(x\in {\hat{\mathcal {D}}}\), \({\mathbb {C}}(x)\) is closed and convex.

Condition \((\mathrm{D}^{\prime })\) ensures that the projection onto \({\mathbb {C}}(x)\), \(x\in {\hat{{\mathcal {D}}}}\) is uniquely defined.

The condition \(\langle P_{{\mathbb {C}}(x)}({\bar{w}})-x \mid {\bar{w}} - x \rangle \le 0\) for all \(x\in {\hat{\mathcal {D}}}\) is equivalent to the condition that \(P_{{\mathbb {C}}(x)}({\bar{w}}) \in H({\bar{w}},x)\) for every \(x\in {\hat{\mathcal {D}}}\). This implies that for every \(x\in {\hat{\mathcal {D}}}\) and every \(h\in {\mathbb {C}}(x)\) we have \(\langle h-x \mid {\bar{w}} - x \rangle \le 0\). The latter, in turn, implies \(P_{{\mathbb {C}}(x)}({\bar{w}}) \in H({\bar{w}},x)\). Therefore, \((\mathrm{C}^{\prime })\) is equivalent to the condition:

$$\begin{aligned} \forall x\in \hat{\mathcal {D}}\forall h\in {\mathbb {C}}(x), \quad \langle h-x | {\bar{w}}-x \rangle \le 0. \end{aligned}$$

Remark 6

Let us comment on the conditions \((\mathrm{A}^{\prime })\), \((\mathrm{B}^{\prime })\), (C\(^{\prime })\). The condition (A\(^{\prime })\) is equivalent to saying that \({\bar{z}}\) is the only stationary point of the vector field \(F(x)=P_{{\mathbb {C}}(x)}({\bar{w}})-x\) inside the considered set \({\hat{\mathcal {D}}}\). The condition (B\(^{\prime })\), together with the convexity of the set \({\mathcal {D}}\), ensures that for any \(\uplambda \in [0,1]\) and any \(x\in {\hat{\mathcal {D}}}\subset {\mathcal {D}}\) we have \((1-\uplambda ) x + \uplambda P_{{\mathbb {C}}(x)}({\bar{w}}) \in {\mathcal {D}}\). The condition (C\(^\prime )\) ensures that \(P_{{\mathbb {C}}(x)}({\bar{w}})\in H({\bar{w}},x)\) and that the function \(t\mapsto \Vert x(t)-{\bar{w}}\Vert \) is nondecreasing (see e.g., Proposition 1), where x(t) is a solution of (PDS) (whenever it exists).

As consequences of Theorems 1 and 2 we can formulate the following theorems.

Theorem 3

Suppose that (A\(^\prime )\), (B\(^\prime )\), (C\(^\prime )\), (D\(^\prime )\) hold. Assume that \(x\mapsto P_{{\mathbb {C}}(x)}({\bar{w}})\) is locally Lipschitz continuous on \({\hat{{\mathcal {D}}}}\setminus \{{\bar{z}}\}\) and continuous on \({\hat{\mathcal {D}}}\). Then the system (PDS) has a unique solution on \([t_0,+\infty )\).

Proof

First, let us show that (A), (B), (C) hold. (A\(^\prime )\) implies that \({\bar{z}}\) is the only stationary point of (PDS), hence (A) holds.

Recall that \({\mathcal {D}}\) is a closed, convex subset of \({\mathbb {B}}(\frac{{\bar{w}}+{\bar{z}}}{2},\frac{\Vert {\bar{w}}-{\bar{z}}\Vert }{2})\) and \({\hat{\mathcal {D}}}\) is given as in (10). By (D\(^\prime )\), the projection \(P_{{\mathbb {C}}(x)}({\bar{w}})\) is well defined for all \(x\in {\hat{\mathcal {D}}}\). By (B\(^\prime )\) and (C\(^\prime )\), assumption (B) is satisfied since for all \(x\in {\hat{{\mathcal {D}}}}\subset {\mathcal {D}}\) and for any \(h\in [0,1]\)

$$\begin{aligned}&x+h(P_{{\mathbb {C}}(x)}({\bar{w}})-x)=(1-h)x + h P_{{\mathbb {C}}(x)}({\bar{w}}) \in {\mathcal {D}},\\&\Vert x+h(P_{{\mathbb {C}}(x)}({\bar{w}})-x)-{\bar{w}}\Vert ^2=\Vert x-{\bar{w}}\Vert ^2 \\&- 2h \langle P_{{\mathbb {C}}(x)}({\bar{w}})-x \mid {\bar{w}}-x \rangle + h^2 \Vert P_{{\mathbb {C}}(x)}({\bar{w}})-x\Vert ^2\ge \Vert x-{\bar{w}}\Vert ^2\ge r, \end{aligned}$$

i.e. \(x+h(P_{{\mathbb {C}}(x)}({\bar{w}})-x)\in {\hat{{\mathcal {D}}}}\). Note that by taking \(h=1\) we obtain that \(P_{{\mathbb {C}}(x)}({\bar{w}})\in {\hat{{\mathcal {D}}}}\) for any \(x\in {\hat{\mathcal {D}}}\). Assumption (C\(^\prime )\) is equivalent to (C) for \(F(x)=P_{{\mathbb {C}}(x)}({\bar{w}})-x\). Observe that the mapping \(F(x)=P_{{\mathbb {C}}(x)}({\bar{w}})-x\) is bounded on \({\hat{\mathcal {D}}}\). Indeed, for any \(x\in {\hat{\mathcal {D}}}\) we have

$$\begin{aligned} \Vert P_{{\mathbb {C}}(x)}({\bar{w}})-x\Vert \le \Vert P_{{\mathbb {C}}(x)}({\bar{w}})\Vert + \Vert x\Vert \le 2R, \end{aligned}$$

where \(R=\sup _{x\in {\hat{\mathcal {D}}}} \Vert x\Vert \). Now, system (PDS) is of the form (DS-0) with \(F(x)=P_{{\mathbb {C}}(x)}({\bar{w}})-x\), and all the assumptions of Theorem 1 are satisfied. The assertion of the theorem follows from Theorem 1.\(\square \)
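To illustrate the mechanism of the proof (in particular the monotonicity of \(t\mapsto \Vert x(t)-{\bar{w}}\Vert \) noted in Remark 6), here is a self-contained explicit-Euler discretization of (PDS) in \(\mathbb {R}^2\). The choices below are ours and purely illustrative: \({\mathbb {T}}\) is the projection onto the halfspace \(K=\{u \mid u_1\le 0\}\) (so \(\text {Fix}\,{\mathbb {T}}=K\) and \({\bar{z}}=(0,1)\) for \({\bar{w}}=(2,1)\)), the projection onto \({\mathbb {C}}(x)=H({\bar{w}},x)\cap H(x,{\mathbb {T}} x)\) is computed by a 2-D KKT candidate enumeration with a small feasibility tolerance, and the localization set \({\hat{\mathcal {D}}}\) is not enforced.

```python
from math import hypot

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

def proj_hs(w, a, beta):
    """Project w onto the halfspace {u : <a, u> <= beta}."""
    s = dot(a, w) - beta
    if s <= 0.0:
        return w
    t = s / dot(a, a)
    return (w[0] - t * a[0], w[1] - t * a[1])

def hs(z1, z2):
    """H(z1, z2) = {h : <h - z2 | z1 - z2> <= 0} as a pair (a, beta)."""
    a = (z1[0] - z2[0], z1[1] - z2[1])
    return a, dot(a, z2)

def proj_two(w, h1, h2, tol=1e-9):
    """Projection onto the intersection of two halfspaces in R^2 by
    enumerating the KKT candidates and keeping the nearest feasible one."""
    (a1, b1), (a2, b2) = h1, h2
    cands = [w, proj_hs(w, a1, b1), proj_hs(w, a2, b2)]
    det = a1[0] * a2[1] - a1[1] * a2[0]
    if abs(det) > 1e-12:          # both constraints active: boundary vertex
        cands.append(((b1 * a2[1] - b2 * a1[1]) / det,
                      (a1[0] * b2 - a2[0] * b1) / det))
    feas = [c for c in cands
            if dot(a1, c) <= b1 + tol and dot(a2, c) <= b2 + tol]
    return min(feas, key=lambda c: hypot(c[0] - w[0], c[1] - w[1]))

# T = projection onto K = {u : u_1 <= 0}: firmly nonexpansive, Fix T = K
T = lambda u: (min(u[0], 0.0), u[1])

wbar = (2.0, 1.0)                 # wbar is not in Fix T; zbar = (0, 1)
x, h, dists = (1.5, 1.0), 0.5, []
for _ in range(40):               # explicit Euler: x+ = x + h (P_C(x)(wbar) - x)
    q = proj_two(wbar, hs(wbar, x), hs(x, T(x)))
    x = (x[0] + h * (q[0] - x[0]), x[1] + h * (q[1] - x[1]))
    dists.append(hypot(x[0] - wbar[0], x[1] - wbar[1]))
```

The recorded distances \(\Vert x_k-{\bar{w}}\Vert \) are nondecreasing, by the same computation as in the proof, and the iterates approach \({\bar{z}}=(0,1)\) up to the feasibility tolerance of the candidate enumeration.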

Theorem 4

Suppose that (A\(^\prime )\), (B\(^\prime )\), (C\(^\prime )\), (D\(^\prime )\) hold. Assume that \(x\mapsto P_{{\mathbb {C}}(x)}({\bar{w}})\) is locally Lipschitz continuous on \({\hat{{\mathcal {D}}}}\setminus \{{\bar{z}}\}\) and continuous on \({\hat{\mathcal {D}}}\). Let x(t) be a solution of (PDS). Assume that for every increasing sequence \(\{t_n\}_{n\in {\mathbb {N}}}\) with \(t_n\rightarrow +\infty \)

$$\begin{aligned} x(t_{n})\rightharpoonup {\tilde{x}} \implies {\tilde{x}}={\bar{z}}. \end{aligned}$$
(27)

Then \(x(t)\rightarrow {\bar{z}}\) as \(t\rightarrow +\infty \).

Proof

By the proof of Theorem 3, assumptions (A), (B) and (C) are satisfied for (PDS), and by assumption \(F(x)=P_{{\mathbb {C}}(x)}({\bar{w}})-x\) is locally Lipschitz continuous on \({\hat{{\mathcal {D}}}}\setminus \{{\bar{z}}\}\). The assertion now follows from Theorem 2. \(\square \)

To investigate the local Lipschitzness of \(x \mapsto P_{{\mathbb {C}}(x)}({\bar{w}})\) on \({\hat{{\mathcal {D}}}}\setminus \{{\bar{z}}\}\) (and the continuity of \(x\mapsto P_{{\mathbb {C}}(x)}({\bar{w}})\) on \({\hat{{\mathcal {D}}}}\)) one should take into account the form of the multifunction \({\mathbb {C}}\). The behaviour of the projection of a given \({\bar{w}}\) onto a polyhedral multifunction \({\mathbb {C}}\), given by a finite number of linear inequalities and equalities, was investigated in, e.g., [8, Corollary 2]; see also [25, Theorem 6.5].

Proposition 10

Let \({\mathbb {T}}: \mathcal {X}\rightarrow \mathcal {X}\), which appears in system (S), be a firmly quasinonexpansive operator, i.e.,

$$\begin{aligned} \forall x \in \mathcal {X}\ \forall y \in \text {Fix}\, {\mathbb {T}} ,\quad \Vert {\mathbb {T}} x-y \Vert ^2 + \Vert {\mathbb {T}} x - x \Vert ^2 \le \Vert x-y\Vert ^2. \end{aligned}$$

Assume that \({\bar{w}}\in \mathcal {X}\), \({\bar{w}}\notin \text {Fix}\, {\mathbb {T}}\), \({\bar{z}}=P_{\text {Fix}\, {\mathbb {T}}}({\bar{w}})\), and let \({\mathcal {D}}={\mathbb {B}}(\frac{{\bar{w}}+{\bar{z}}}{2}, \frac{\Vert {\bar{w}}-{\bar{z}}\Vert }{2})\) and \({\hat{\mathcal {D}}}=\{ x \in {\mathcal {D}} \mid \Vert x-{\bar{w}}\Vert ^2\ge r \}\) for some \(r\in (0,\Vert {\bar{w}}-{\bar{z}}\Vert ^2)\). Then the assumptions (A\(^\prime )\), (B\(^\prime )\), (C\(^\prime )\), (D\(^\prime )\) hold for the system (PDS) with \({\mathbb {C}}: {\mathcal {D}} \rightrightarrows \mathcal {X}\) defined as \({\mathbb {C}}(x){:}{=}H({\bar{w}},x)\cap H(x,{\mathbb {T}} x)\).

Proof

By [6, Corollary 4.25], we have

$$\begin{aligned} \text {Fix}\, {\mathbb {T}}&= \bigcap _{ x \in \mathcal {X}} \{ y \in \mathcal {X}\mid \langle y - {\mathbb {T}} x \mid x-{\mathbb {T}} x\rangle \le 0 \}=\bigcap _{ x \in \mathcal {X}} H(x,{\mathbb {T}} x). \end{aligned}$$
(28)

Assumption (A\(^\prime )\) follows from (28): \(\text {Fix}\, {\mathbb {T}} \ni {\bar{z}}\in H(x,{\mathbb {T}} x)\) for all \(x\in {\hat{\mathcal {D}}}\), and \(x \in \text {Fix}\, {\mathbb {T}} \cap {\hat{\mathcal {D}}} \iff x = {\bar{z}}\). Assumption (B\(^\prime )\) follows from the fact that for any \(x\in {\hat{\mathcal {D}}}\subset {\mathcal {D}}\) we have \({\bar{z}}\in {\mathbb {C}}(x)\), hence \({\bar{z}}\in H({\bar{w}},P_{{\mathbb {C}}(x)}({\bar{w}}))\) and therefore \(P_{{\mathbb {C}}(x)}({\bar{w}})\in {\mathbb {B}}(\frac{{\bar{w}}+{\bar{z}}}{2}, \frac{\Vert {\bar{w}}-{\bar{z}}\Vert }{2})={\mathcal {D}}\) (see (8)). Assumption (C\(^\prime )\) follows from the fact that for any \(x\in {\mathcal {D}}\) we have \({\mathbb {C}}(x)\subset H({\bar{w}},x)\), hence \(P_{{\mathbb {C}}(x)}({\bar{w}})\in H({\bar{w}},x)\). Assumption (D\(^\prime )\) is satisfied since \(H({\bar{w}},x)\cap H(x,{\mathbb {T}} x)\) is closed and convex.\(\square \)
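The two ingredients of the proof can be checked numerically on a toy instance (our own choice, not taken from the sources cited above): let \(\mathcal {X}=\mathbb {R}\) and let \({\mathbb {T}}\) be the projection onto \([-1,1]\), which is firmly nonexpansive with \(\text {Fix}\,{\mathbb {T}}=[-1,1]\). The sketch verifies, on a grid, the firm quasinonexpansiveness inequality and the inclusion \(y\in H(x,{\mathbb {T}} x)\) for \(y\in \text {Fix}\,{\mathbb {T}}\), cf. (28):

```python
def T(x):
    """Projection onto [-1, 1]: firmly nonexpansive, Fix T = [-1, 1]."""
    return max(-1.0, min(1.0, x))

grid_x = [i / 10.0 for i in range(-30, 31)]     # sample points of X = R
grid_fix = [i / 10.0 for i in range(-10, 11)]   # sample points of Fix T
checks = []
for x in grid_x:
    for y in grid_fix:
        # firm quasinonexpansiveness: |Tx - y|^2 + |Tx - x|^2 <= |x - y|^2
        fqne = (T(x) - y) ** 2 + (T(x) - x) ** 2 <= (x - y) ** 2 + 1e-12
        # y lies in H(x, Tx): <y - Tx | x - Tx> <= 0, as in (28)
        in_H = (y - T(x)) * (x - T(x)) <= 1e-12
        checks.append(fqne and in_H)
```

Every pair on the grid passes both checks, in line with (28) and with the hypothesis of Proposition 10.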

Depending on the choice of the operator \({\mathbb {T}}\) in Proposition 10, we obtain dynamical systems of the form (PDS) related to different algorithms. In particular, our approach encompasses the dynamical systems related to the following algorithms.

  1. Ex 1.

    When \({\mathbb {T}}: \mathcal {X}\rightarrow \mathcal {X}\) is firmly quasinonexpansive and \((Id-{\mathbb {T}})\) is demiclosed at 0, dynamical system (DS-0) corresponds to the best approximation algorithm for finding a point \({\bar{z}}\) from the set of fixed points of \({\mathbb {T}}\), i.e., for finding \({\bar{z}}\in \mathcal {X}\) such that \({\bar{z}}=P_{\text {Fix}{\mathbb {T}}}({\bar{w}})\) (see [6, Theorem 30.8]).

  2. Ex 2.

    When \({\mathbb {T}}=J_{A}\), where \(A:\mathcal {X}\rightrightarrows \mathcal {X}\) is maximally monotone, dynamical system (DS-0) corresponds to the best approximation algorithm for finding \(x\in \mathcal {X}\) such that \(0\in Ax\) (see [6, Corollary 30.11]). Let us recall that the resolvent operator of A is defined as \(J_{A}: \mathcal {X}\rightarrow \mathcal {X}\), \(J_{A}=(Id+A)^{-1}\).

  3. Ex 3.

    When \({\mathbb {T}}=(1/2)(Id+J_{\gamma A}\circ (Id-\gamma B))\), where \(A:\mathcal {X}\rightrightarrows \mathcal {X}\) is maximally monotone, \(B: \mathcal {X}\rightarrow \mathcal {X}\) is \(\beta \)-cocoercive and \(\gamma \in (0,2\beta )\), dynamical system (DS-0) corresponds to the best approximation algorithm for finding \(x\in \mathcal {X}\) such that \(0\in Ax+Bx\) (see [6, Corollary 30.12]).

  4. Ex 4.

    When \({\mathbb {T}}:\mathcal {H}\times \mathcal {G}\rightarrow \mathcal {H}\times \mathcal {G}\) is defined as

    $$\begin{aligned} \begin{aligned} {\mathbb {T}}(x)=P_{H(x)}(x),\ H(x){:}{=}\{ h \in \mathcal {H}\times \mathcal {G}\mid \langle h | s^{*}(x)\rangle \le \eta (x) \}, \end{aligned} \end{aligned}$$
    (29)

    and, for any \(x=(p,v^{*})\in \mathcal {H}\times \mathcal {G}\),

    $$\begin{aligned} \left. \begin{aligned}&s^{*}(x) {:}{=} (a^{*}(x) + L^{*}b^{*}(x), b(x) - La(x)); \\&\eta (x) {:}{=}\langle a(x) | a^{*}(x) \rangle + \langle b(x) \mid b^{*}(x) \rangle ;\\&a(x) {:}{=} J_{\gamma A} (p - \gamma L^{*} v^{*}),\quad b(x) {:}{=} J_{\mu B} (L p + \mu v^{*}) ; \\&a^{*}(x) {:}{=} {}^{\gamma }A(p - \gamma L^{*} v^{*}),\quad b^{*}(x) {:}{=} {}^{\mu }{B}(Lp+ \mu v^{*}), \gamma ,\mu \in (0,1), \end{aligned} \right\} \end{aligned}$$
    (30)

    dynamical system (DS-0) corresponds to the best approximation algorithm for finding \((p,v^{*})\in \mathcal {H}\times \mathcal {G}\) such that

    $$\begin{aligned} 0\in Ap+L^{*}BLp \quad \text {and}\quad 0\in -LA^{-1}(-L^{*}v^{*})+B^{-1}v^{*} \end{aligned}$$

    (see [2]). Let us recall that for any \(\gamma >0\), \({}^\gamma A:\mathcal {H}\rightarrow \mathcal {H}\) is the Yosida approximation of A, \({}^\gamma A = \frac{1}{\gamma } (Id - J_{\gamma A})\).
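To make Exs. 2 and 3 concrete in the simplest setting, consider \(\mathcal {X}=\mathbb {R}\), \(A=\partial |\cdot |\), whose resolvent \(J_{\gamma A}\) is soft-thresholding, and \(B=\nabla \frac{1}{2}(\cdot -2)^{2}\), which is \(1\)-cocoercive; these concrete choices are ours and serve only as an illustration. The sketch below iterates the operator of Ex. 3 and also evaluates the Yosida approximation appearing in Ex. 4; the fixed point \(x=1\) satisfies \(0\in Ax+Bx\):

```python
def soft(u, g):
    """Resolvent J_{gA} = (Id + gA)^(-1) of A = d|.|: soft-thresholding."""
    if abs(u) <= g:
        return 0.0
    return u - g if u > 0 else u + g

def yosida(u, g):
    """Yosida approximation ^gA = (1/g)(Id - J_{gA}) of A = d|.|."""
    return (u - soft(u, g)) / g

B = lambda x: x - 2.0              # gradient of (1/2)(x - 2)^2, 1-cocoercive
g = 1.0                            # gamma in (0, 2*beta) with beta = 1
T = lambda x: 0.5 * (x + soft(x - g * B(x), g))   # operator of Ex. 3

x = 0.0
for _ in range(60):                # fixed-point iteration x+ = T x
    x = T(x)
# x approaches the zero of A + B: 0 in d|x| + x - 2 holds at x = 1
```

The iteration converges geometrically here because, with \(\gamma =1\), the operator \({\mathbb {T}}\) reduces to \(x\mapsto (x+1)/2\).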

For other multifunctions \({\mathbb {C}}\) and other properties of projections onto moving sets, see, e.g., [29, Theorem 3.1], [15, Theorem 3.10], [33, Theorem 2.1], [24, Proposition 5.2] and [25, Example 6.4].