Abstract
We introduce a dynamical system for the problem of finding zeros of the sum of two maximally monotone operators. We investigate the existence, uniqueness and extendability of solutions to this dynamical system in a Hilbert space. We prove that the trajectories of the proposed dynamical system converge strongly to a primal–dual solution of the considered problem. Under explicit time discretization of the dynamical system we obtain the best approximation algorithm for solving the coupled monotone inclusion problem.
1 Introduction
Let \(\mathcal {H}\), \(\mathcal {G}\) be Hilbert spaces. We consider the problem of finding \(p\in \mathcal {H}\) such that
where \( A: \mathcal {H}\rightarrow \mathcal {H}\) and \(B: \mathcal {G}\rightarrow \mathcal {G}\) are maximally monotone operators and \(L: \mathcal {H}\rightarrow \mathcal {G}\) is a bounded linear operator. Together with problem (P) we consider the dual problem formulated as finding \(v^{*}\in \mathcal {G}\) such that
To problems (P) and (D) we associate the Kuhn–Tucker set defined as
The set Z is nonempty if and only if there exist solutions to the primal problem (P) and to the dual problem (D) (see [26, Corollary 2.12]).
Our aim in this paper is to investigate, for given \(x_{0}, {\bar{w}}\in \mathcal {H}\times \mathcal {G}\), the following dynamical system, whose solution asymptotically approaches a solution of (P)–(D),
where \({\mathbb {T}} : \mathcal {H}\times \mathcal {G}\rightarrow \mathcal {H}\times \mathcal {G}\) is an operator whose fixed point set is Z (\(\text {Fix}\,{\mathbb {T}} = Z\)), with Z defined by (Z), and \(Q:(\mathcal {H}\times \mathcal {G})^{3}\rightarrow \mathcal {H}\times \mathcal {G}\),
is the projection of the element \({\bar{w}}\) onto the set \(H({\bar{w}},b)\cap H(b,c)\), the intersection of two half-spaces of the form
In particular, \( H({\bar{w}},b)=\{ h\in \mathcal {H}\times \mathcal {G}\mid \langle h-b | {\bar{w}}-b \rangle \le 0 \}. \)
Under explicit discretization with step size equal to one, the system (S) becomes the best approximation algorithm for finding a fixed point of \({\mathbb {T}}\) introduced in [2, Proposition 2.1] (see also [6, Theorem 30.8]),
with the choice \(x_{n+1/2}{:}{=}{\mathbb {T}}(x_n)\) and the starting point \(x_0\). The characteristic feature of this algorithm is the strong convergence of the sequence \(x_n\) to a fixed point of \({\mathbb {T}}\) (see also [5]). In contrast, the dynamical system investigated, e.g., in [11] is related to another primal–dual method, which exhibits weak convergence.
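To make the discretized scheme concrete, the iteration \(x_{n+1}=Q(x_0,x_n,{\mathbb {T}} x_n)\) can be sketched in a toy setting. The code below assumes the standard three-case closed form of Haugazeau's projection onto the intersection of the two half-spaces (cf. [6]); the operator \({\mathbb {T}}\) here is a hypothetical choice (the projection onto a half-plane in \({\mathbb {R}}^2\)), not the primal–dual operator of the paper.

```python
# Toy sketch of the best approximation iteration x_{n+1} = Q(x0, x_n, T x_n)
# in R^2, assuming Q is Haugazeau's projection of x0 onto
# H(x0, x_n) cap H(x_n, T x_n) (cf. [6]); illustrative example only.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def Q(w, y, z):
    """Project w onto H(w, y) cap H(y, z), where
    H(a, b) = {h : <h - b | a - b> <= 0} (assumed three-case closed form)."""
    wy = tuple(a - b for a, b in zip(w, y))   # w - y
    yz = tuple(a - b for a, b in zip(y, z))   # y - z
    chi, mu, nu = dot(wy, yz), dot(wy, wy), dot(yz, yz)
    rho = mu * nu - chi * chi
    if rho == 0 and chi >= 0:
        return z
    if rho > 0 and chi * nu >= rho:
        # w + (1 + chi/nu) * (z - y)
        return tuple(wi + (1 + chi / nu) * (-v) for wi, v in zip(w, yz))
    if rho > 0:
        # y + (nu/rho) * (chi*(w - y) + mu*(z - y))
        return tuple(y[i] + (nu / rho) * (chi * wy[i] - mu * yz[i])
                     for i in range(len(y)))
    raise ValueError("the two half-spaces have empty intersection")

# T: projection onto the half-plane {x : x[0] <= 0}; Fix T is that half-plane.
T = lambda x: (min(x[0], 0.0), x[1])

x0 = (2.0, 1.0)
x = x0
for _ in range(20):
    x = Q(x0, x, T(x))
print(x)  # converges to P_{Fix T}(x0) = (0, 1)
```

With \({\mathbb {T}}\) itself a projection, the iterate reaches \(P_{\text {Fix}\,{\mathbb {T}}}(x_0)\) after one step in this example; in general the convergence is strong but not finite.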
In the case when \(A=\partial f\), \(B=\partial g\), where \(f: \mathcal {H}\rightarrow {\mathbb {R}}\cup \{+\infty \}\), \(g: \mathcal {G}\rightarrow {\mathbb {R}}\cup \{+\infty \}\) are proper, convex, lower semicontinuous (l.s.c.) functions, the problem (P) (if solvable) reduces to finding a point \(p\in \mathcal {H}\) solving the following minimization problem (see [27])
and (D) reduces to finding a point \(v^{*}\in \mathcal {G}\) solving the following maximization problem
First order dynamical systems related to optimization problems have been discussed by many authors (see, e.g., [1, 4, 9, 10, 12]). In those papers, a natural assumption is that the vector field F is globally Lipschitz and consequently, the existence and uniqueness of solutions to the dynamical system is guaranteed by classical results (see e.g. [13, Theorem 7.3]). For instance, Abbas, Attouch and Svaiter considered the following system in [1]
where \(\varPhi : \mathcal {H}\rightarrow {\mathbb {R}}\cup \{+\infty \}\) is a proper, convex and l.s.c. function defined on a Hilbert space \(\mathcal {H}\), \(B: \mathcal {H}\rightarrow \mathcal {H}\) is a \(\beta \)-cocoercive operator and \(\text {prox}_{\mu \varPhi }: \mathcal {H}\rightarrow \mathcal {H}\) is the proximal operator defined as
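Recall that the proximal operator is given by \(\text {prox}_{\mu \varPhi }(x)=\mathop {\mathrm {argmin}}_{u}\, \varPhi (u)+\frac{1}{2\mu }\Vert u-x\Vert ^2\). As an illustration not taken from the paper: for \(\varPhi (u)=|u|\) on the real line this reduces to the well-known soft-thresholding map, which the sketch below also checks against the defining minimization on a grid.

```python
# Sketch: prox_{mu*Phi}(x) = argmin_u Phi(u) + (1/(2*mu)) * |u - x|^2.
# For Phi = |.| on R this is soft-thresholding; illustrative example only.

def prox_abs(x, mu):
    """Proximal operator of Phi(u) = |u| with parameter mu (soft-thresholding)."""
    if x > mu:
        return x - mu
    if x < -mu:
        return x + mu
    return 0.0

def prox_numeric(x, mu, lo=-5.0, hi=5.0, n=4001):
    """Brute-force grid minimization of the defining objective, as a check."""
    h = (hi - lo) / (n - 1)
    best_u, best_val = lo, float("inf")
    for i in range(n):
        u = lo + i * h
        val = abs(u) + (u - x) ** 2 / (2 * mu)
        if val < best_val:
            best_u, best_val = u, val
    return best_u

print(prox_abs(3.0, 1.0))   # 2.0
print(prox_abs(-0.5, 1.0))  # 0.0
```

The grid check agrees with the closed form up to the grid spacing, which is all one can ask of a brute-force verification.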
Furthermore, Boţ and Csetnek, in [9], studied the dynamical system
where \(T: \mathcal {H}\rightarrow \mathcal {H}\) is a nonexpansive operator and \(\uplambda : [0,\infty ) \rightarrow [0,1]\) is a Lebesgue measurable function. By taking in (7) the operator \(T=J_{\gamma A}( Id- \gamma B )\), where \(A: \mathcal {H}\rightarrow \mathcal {H}\) is a maximally monotone operator, the system (7), under a special discretization (see e.g. [9, Remark 8]), leads to the forward-backward algorithm for solving operator inclusion problems of the form
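A minimal sketch of the forward-backward iteration \(x_{k+1}=J_{\gamma A}(x_k-\gamma B x_k)\), under the illustrative assumption (not from the paper) that A is the normal cone of \(C=[0,\infty )^2\), so that \(J_{\gamma A}\) is the projection onto C, and \(Bx=x-b\) (which is 1-cocoercive):

```python
# Sketch of forward-backward: x_{k+1} = J_{gamma A}(x_k - gamma * B x_k).
# Hypothetical data: A = normal cone of C = [0, inf)^2 (resolvent = projection
# onto C), B x = x - b. Then 0 in Ax + Bx means x = projection of b onto C.

b = (1.5, -2.0)
gamma = 0.5  # step size in (0, 2*beta) with beta = 1 for this B

def resolvent(x):
    """J_{gamma A}: componentwise projection onto [0, inf)^2."""
    return tuple(max(xi, 0.0) for xi in x)

def forward(x):
    """Forward step x - gamma * B x with B x = x - b."""
    return tuple(xi - gamma * (xi - bi) for xi, bi in zip(x, b))

x = (5.0, 5.0)
for _ in range(100):
    x = resolvent(forward(x))
print(x)  # approaches (1.5, 0.0), the projection of b onto [0, inf)^2
```

The negative component of b is clipped to zero by the resolvent, while the positive component converges geometrically to \(b_1\), in line with the interpretation of the inclusion as a constrained projection problem.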
For other discretizations see e.g., [28, Section 2.3].
The most essential difference between (S) and the systems (6) and (7) is that, in general, one cannot expect the vector field Q given in (S) to be globally Lipschitz with respect to the variable x, as is the case for the dynamical systems (6) and (7).
The contribution of the present investigation is as follows. We formulate the problem and provide preliminary facts in Sects. 2 and 3, respectively. In Sect. 4 we prove the existence and uniqueness of solutions to dynamical system (S) by studying a more general problem (DS-0). Extendability of solutions to dynamical system (DS-0) is studied in Sect. 5. The behaviour at \(+ \infty \) of solutions to (DS-0) is investigated in Sect. 6. In Sect. 7 we present applications of the results obtained for (DS-0) to projected dynamical systems (PDS).
2 Formulation of the Problem
Suppose that the set Z given by (Z) is nonempty. Then for all \(x\in \mathcal {H}\times \mathcal {G}\), \(Z\subset H(x,{\mathbb {T}} x)\). Let \({\bar{w}}\in \mathcal {H}\times \mathcal {G}\) and \({\bar{z}}=P_{Z}({\bar{w}})\). Let us define an open ball in Hilbert space \(\mathcal {H}\times \mathcal {G}\) centered at \(a\in \mathcal {H}\times \mathcal {G}\) with some radius \(R>0\) as follows:
and its closure by
We limit ourselves to a closed subset \({\mathcal {D}}\subset \mathcal {H}\times \mathcal {G}\) such that for all \(x \in {\mathcal {D}}\) we have \({\bar{z}}\in H({\bar{w}},x)\). This latter condition ensures that \({\bar{z}}\) is an equilibrium point of
The fact that
implies the following
Therefore, we will limit our attention to \(Q({\bar{w}},\cdot ,{\mathbb {T}} (\cdot ))\) given by (1), defined on \({\mathcal {D}}\subset \bar{{\mathbb {B}}}\left( \frac{{\bar{w}}+{\bar{z}}}{2},\frac{\Vert {\bar{w}}-{\bar{z}}\Vert }{2}\right) \).
Let us note that for \(x={\bar{w}}\) we have \(H({\bar{w}},x)=\mathcal {H}\times \mathcal {G}\). This motivates us to restrict our investigations to the set \({\hat{\mathcal {D}}}{:}{=}{\mathcal {D}} \setminus B({\bar{w}},r)\) for some \(r>0\) such that \({\hat{\mathcal {D}}}\) is nonempty.
System (S) is an autonomous dynamical system of the form
where \(F: {\hat{\mathcal {D}}} \rightarrow \mathcal {X}\), with \(\mathcal {X}\) a Hilbert space, is a continuous function, locally Lipschitz on \({\hat{\mathcal {D}}}\) except at a single point \({\bar{z}}\in {\hat{\mathcal {D}}}\), and \({\hat{\mathcal {D}}}\) is a closed and bounded set in \(\mathcal {X}\). Indeed, when \(F(x){:}{=}Q({\bar{w}},x,{\mathbb {T}} x)-x\), where \({\mathbb {T}}: \mathcal {H}\times \mathcal {G}\rightarrow \mathcal {H}\times \mathcal {G}\) is defined as in (30) and \(Q: (\mathcal {H}\times \mathcal {G})^3 \rightarrow \mathcal {H}\times \mathcal {G}\) is defined in (2), the system (DS) reduces to (S). For other applications we refer the reader to Sect. 7.
A survey of existence and uniqueness results going beyond the classical Cauchy–Picard theorem, from finite- to infinite-dimensional settings, can be found in [20].
The main difficulties in investigating the existence of solutions to autonomous ODEs in infinite-dimensional settings are due to the lack of compactness, see [22, Remark 5.1.1]. For instance, the continuity of the right-hand side vector field F is not enough to obtain a counterpart of Peano’s theorem in infinite-dimensional spaces [17], even in Hilbert spaces [34].
In [18] Godunov proved that in every infinite-dimensional Banach space there exists a continuous vector field F such that the related (DS) has no solution, whereas the global Lipschitz condition on the right-hand side field ensures, by the Cauchy–Lipschitz–Picard–Lindelöf theorem, the uniqueness and extendability of the solution, see [13, Theorem 7.3]. Some attempts to weaken the global Lipschitz condition of the right-hand side vector field have been made in the context of the existence of solutions, see, e.g., [22, Theorem 5.1.1] and [19, 23, 30, 31] and the references therein. It is observed that local Lipschitzness of the vector field allows one to prove local existence and uniqueness for the related problems. For instance, one can adapt [22, Theorem 5.1.1] to the case of autonomous differential systems in the following way.
Corollary 1
Define the ball \(R_0 = \{ x \in \mathcal {X}\mid \Vert x-x_0\Vert \le \beta \}\) for some \(\beta >0\). Let \(f: R_0 \rightarrow \mathcal {X}\). Assume that \(\Vert f(x)\Vert \le {\tilde{M}}\) for \(x\in R_0\) and \(\Vert f(x_1)-f(x_2)\Vert \le K \Vert x_1-x_2\Vert \) for \(x_1,x_2\in R_0\), where K and \({\tilde{M}}\) are nonnegative constants. Let \({\alpha }>0\) be such that \(\alpha \le \frac{\beta }{{\tilde{M}}}\). Then there exists one and only one (strongly) continuously differentiable function x(t) satisfying
Let us note that Corollary 1 is not applicable to system (DS) in the case when \(x_{0} \notin \text {int}\, {\hat{\mathcal {D}}}\) (see also Remark 4 below). Moreover, it was shown that the local Lipschitz condition is not enough to guarantee the existence of trajectories on \([t_{0},+\infty )\) (see e.g., [21] and references therein). Instead, in Sects. 4 and 5 we will use modified standard techniques to show the existence and uniqueness of solutions to (DS).
In [14] a smooth vector field is constructed such that the respective autonomous dynamical system has a bounded maximal solution which is not globally defined.
In finite-dimensional settings, under the assumption of local Lipschitzness and some boundedness of the vector field, the existence and uniqueness of the trajectory on \([t_{0},+\infty )\) are shown in [32] by Xia and Wang. The authors applied their results to investigations of projected dynamical systems.
3 Preliminaries
In this section we formulate the system (S) (and (DS)) in the general form.
Let \({\bar{w}},{\bar{z}} \in \mathcal {X}\) and let the norm in the Hilbert space \(\mathcal {X}\) be defined as \(\Vert \cdot \Vert =\sqrt{\langle \cdot | \cdot \rangle }\). Let \({\mathcal {D}}\subset \mathcal {X}\) be a closed convex subset of \(\mathcal {X}\) such that \({\bar{w}},{\bar{z}}\in {\mathcal {D}}\) and
Note that the condition (9) immediately implies that \({\bar{w}}\) and \({\bar{z}}\) are boundary points of the set \({\mathcal {D}}\).
Let r be such that \(\Vert {\bar{w}}-{\bar{z}}\Vert ^{2}>r>0\). Throughout this paper, we consider the set \({\hat{{\mathcal {D}}}}\) related to \({\mathcal {D}}\) (see Fig. 1):
We consider the following Cauchy problem
where \(F: {\hat{{\mathcal {D}}}} \rightarrow \mathcal {X}\) is continuous on \({\hat{{\mathcal {D}}}}\), locally Lipschitz on \({\hat{{\mathcal {D}}}}\setminus \{ {\bar{z}} \}\) and bounded on \({\hat{{\mathcal {D}}}}\) (i.e., \(\Vert F(x)\Vert \le M\) for some \(M>0\) and all \(x\in {\hat{{\mathcal {D}}}}\)).
Moreover, we assume:
-
(A)
\({\bar{z}}\) is the only zero point of F in \({\hat{{\mathcal {D}}}}\), i.e. \(F(x)=0\) iff \(x={\bar{z}}\).
-
(B)
for all \(x\in {\hat{{\mathcal {D}}}}\), for all \(h\in [0,1]\) we have \(x+hF(x)\in {\hat{{\mathcal {D}}}}\).
Together with assumptions (A) and (B) we also consider the following assumption related to the behaviour of the projection:
-
(C)
\(\langle F(x) | {\bar{w}} - x \rangle \le 0\) for all \(x\in {\hat{{\mathcal {D}}}}\).
Remark 1
The motivation for considering a nonconvex set \({\hat{{\mathcal {D}}}}\) comes from the following observation. Consider \(F: {\mathcal {D}} \rightarrow \mathcal {X}\) defined as
where \(P_{{\mathbb {C}}(x)}({\bar{w}})\) is the projection of \({\bar{w}}\) onto \({\mathbb {C}}(x)\), \({\mathbb {C}}: {\mathcal {D}} \rightrightarrows \mathcal {X}\) is a multifunction given by \({\mathbb {C}}(x)=H({\bar{w}},x)\cap H(x,g(x))\) (see formula (2) for \(H(\cdot ,\cdot )\)) and \(g: \mathcal {X}\rightarrow \mathcal {X}\) satisfies \({\bar{z}}\in H(x,g(x))\) for all \(x\in \mathcal {X}\). Under a suitable assumption on g, the function F given by (11) is locally Lipschitz on \({\mathcal {D}}\setminus \{{\bar{w}},{\bar{z}}\}\) (see e.g. [7]), continuous on \({\mathcal {D}}\setminus \{{\bar{w}}\}\) and bounded on \({\mathcal {D}}\).
Throughout the paper we use the following concept of solutions for dynamical systems (DS-0) and (DS) and their extendability.
Definition 1
Let
Solution of
where \(F:\ A \rightarrow \mathcal {X}\), \(A\subseteq \mathcal {X}\), on interval \({{\mathcal {T}}}\) is any function
satisfying
-
1.
initial condition \(x(t_{0})=x_{0}\);
-
2.
the equation \({\dot{x}}(t)=F(x(t))\) for all \(t\in {{\mathcal {T}}}\), where the differentiation is understood in the sense of the strong derivative on the space \(\mathcal {X}\); at a boundary point of the interval \({{\mathcal {T}}}\), if it belongs to \({{\mathcal {T}}}\), the derivative is understood in the one-sided sense.
Definition 2
A solution x(t) to problem (DS) on an interval \({\mathcal {T}}_1=[0,T]\) (or \({\mathcal {T}}_1=[0,T)\)) is called non-extendable if there is no solution \(x_2(\cdot )\in C^1({\mathcal {T}}_2,{\hat{{\mathcal {D}}}})\) of this problem on any interval \({\mathcal {T}}_2\) satisfying the conditions:
-
1.
\({\mathcal {T}}_2\supsetneq {\mathcal {T}}_1\);
-
2.
\(\forall t\in {\mathcal {T}}_1,\quad x_{2}(t)=x(t)\).
Remark 2
If x(t) is a solution of the Cauchy problem (DS-0) on an interval \({\mathcal {T}}=[0,T]\) (or \({\mathcal {T}}=[0,T)\)), then the restriction of x(t) to any interval \({\mathcal {T}}_1=[t_{0},t_{1}]\subset {\mathcal {T}}\) (or \({\mathcal {T}}_1=[t_{0},t_{1})\subset {\mathcal {T}}\)) is a solution of the Cauchy problem (DS-0) on \({{\mathcal {T}}}_1\) with initial condition \(x_0=x(t_0)\).
The main results on the existence, uniqueness and extendability of solutions to (DS) read as follows.
Theorem 1
(Existence and uniqueness) Suppose that assumptions (A), (B) and (C) hold. There exists a unique solution of (DS-0) on \([t_{0},+\infty )\).
Theorem 2
(Behavior at \(+\infty \)) Let x(t) be a solution of (DS-0) on \([t_{0},+\infty )\). Assume that
for every increasing sequence \(\{t_n\}_{n\in {\mathbb {N}}}\), \(t_n\rightarrow +\infty \)
where x(t) is the unique solution of (DS-0).
Then the trajectory x(t) satisfies \(\lim _{t\rightarrow +\infty } x(t)={\bar{z}}\), where the convergence is understood in the sense of the norm of \(\mathcal {X}\).
Remark 3
Condition (12) can be seen as a continuous analogue of condition (iv) of [2, Proposition 2.1]. Namely, to obtain the strong convergence of the sequence generated by (3), it is assumed in [2, Proposition 2.1] that for any strictly increasing sequence \(\{k_{n}\}\subset {\mathbb {N}}\) the following implication holds:
4 Solutions to (DS-0) on Closed Intervals
In this section we consider the existence and uniqueness of solutions to (DS-0) defined on closed intervals, namely \([t_{0},T]\), where \(T>t_{0}\) is finite. In deriving the existence and uniqueness results, we modify two standard approaches (with the help of assumptions (A)–(C)): the Euler method (Sect. 4.1) and the contraction mapping principle (Sect. 4.2). To this aim we will use the following proposition.
Proposition 1
Assume that (C) holds. Then any solution x(t) of (DS-0) satisfies the condition
Proof
Let us note that x(t) is continuously differentiable on \( [t_{0},+\infty ) \), therefore by (C) we have
\(\square \)
Now we show the uniqueness of trajectories.
Proposition 2
Let \(t_{0}\ge 0\) and let \(x_{0}\in {\hat{{\mathcal {D}}}}\setminus \{ {\bar{z}} \}\). Assume that assumptions (A) and (C) hold. If (DS-0) is solvable on a given interval \([t_0,T]\), then the solution is unique on this interval.
Proof
Suppose that \(x_{1}(\cdot )\) and \(x_{2}(\cdot )\) solve (DS-0) on the interval \([t_0,T]\). Let \({\bar{t}}\in [t_{0},T]\) be such that
Let us note that \(x_{1}(t_{0})=x_{0}=x_{2}(t_{0})\). Consider two cases:
-
Case 1
: \(x_{1}({\bar{t}})=x_{2}({\bar{t}})={\bar{z}}\). Then, by Proposition 1, \(\Vert x_{1}(t)-{\bar{w}}\Vert \ge \Vert {\bar{z}}-{\bar{w}}\Vert \) and \(\Vert x_{2}(t)-{\bar{w}}\Vert \ge \Vert {\bar{z}}-{\bar{w}}\Vert \) for \(t\ge {\bar{t}}\). However, since \({\bar{z}}\in {\hat{\mathcal {D}}}\) and, by (8), \({\hat{\mathcal {D}}}\subset {\mathcal {D}} \subset \bar{{\mathbb {B}}}(\frac{{\bar{w}}+{\bar{z}}}{2},\frac{\Vert {\bar{w}}-{\bar{z}}\Vert }{2})\),
$$\begin{aligned} \{ x\in \mathcal {X}\mid \Vert x-{\bar{w}}\Vert \ge \Vert {\bar{w}}-{\bar{z}}\Vert \} \cap {\hat{{\mathcal {D}}}}= \{{\bar{z}}\}. \end{aligned}$$Therefore, by assumption (A), \(x_{1}(t)=x_{2}(t)={\bar{z}}\) for all \(t\in [{\bar{t}},T]\).
-
Case 2
: \(x_1({\bar{t}})=x_2({\bar{t}})\ne {\bar{z}}\) and \({\bar{t}}< T\). Then, by the local Lipschitzness of \(F(\cdot )\) on \({\hat{{\mathcal {D}}}}\setminus \{{\bar{z}}\}\), there exists a neighbourhood \(U(x_1({\bar{t}}))\) of \(x_1({\bar{t}})\) such that F is Lipschitz on \(U(x_1({\bar{t}}))\) with some constant \(L_{x_1({\bar{t}})}\), i.e.,
$$\begin{aligned} \forall x^1,x^2 \in U(x_1({\bar{t}})) \quad \Vert F(x^1)-F(x^2)\Vert \le L_{x_1({\bar{t}})} \Vert x^1-x^2\Vert . \end{aligned}$$Since \(x_1\) and \(x_2\) are Lipschitz functions with constant M there exists a neighbourhood \(V({\bar{t}})\cap [t_0,T]\) such that
$$\begin{aligned} \forall t\in V({\bar{t}})\cap [t_0,T] \quad x_1(t)\in U(x_1({\bar{t}})) \quad \wedge \quad x_2(t)\in U(x_1({\bar{t}})). \end{aligned}$$Then for \(t\in V({\bar{t}})\cap [t_0,T]\)
$$\begin{aligned}&\frac{d}{dt}\left( \frac{1}{2}\Vert x_{1}(t)-x_{2}(t)\Vert ^{2}\right) =\langle {\dot{x}}_{1}(t)-{\dot{x}}_{2}(t) | x_{1}(t)-x_{2}(t)\rangle \\&= \langle F(x_{1}(t))-F(x_{2}(t)) | x_{1}(t)-x_{2}(t)\rangle \le L_{x_{1}({\bar{t}})}\Vert x_{1}(t)-x_{2}(t)\Vert ^{2}. \end{aligned}$$By using Gronwall’s inequality for the function \(t\mapsto \Vert x_{1}(t)-x_{2}(t)\Vert ^{2}\) we obtain that \(\Vert x_{1}(t)-x_{2}(t)\Vert ^{2}\le 0\), i.e., \(x_{1}(t)=x_{2}(t)\) for \(t\in V({\bar{t}})\cap [t_{0},T]\). This contradicts (13), since \({\bar{t}}< T\).
\(\square \)
Proposition 3
x(t) is a solution of (DS-0) on \({\mathcal {I}}=[t_0,T]\) (\(T>t_0\) is arbitrary) if and only if it satisfies the condition
where the integral is understood in the sense of Riemann and \(x(t)\in {\hat{{\mathcal {D}}}}\) for \(t\in {\mathcal {I}}\).
Let us define
and
Let us note that \({\mathbb {B}}_{t_0,T}\) is a complete metric space due to the fact that \({\hat{{\mathcal {D}}}}\) is a closed subset of the Hilbert space \(\mathcal {X}\). Moreover, in the sequel we consider on \({\hat{{\mathcal {D}}}}\) the topology induced from the space \(\mathcal {X}\).
4.1 Euler Method
We start with the following construction of Euler trajectories.
For any \(\uplambda \in (0,1]\) define \(c_n^{\uplambda }\), \(n=0,1,\dots \) as follows
Then, for any \(\uplambda \in (0,1]\) define a continuous trajectory on \([t_0,T]\) as follows
Proposition 4
Let \(t_0>0\) and let \(x_0\in {\hat{{\mathcal {D}}}}\setminus \{ {\bar{z}} \}\). Assume that (B) holds.
-
1.
If \(\mathcal {X}\) is finite-dimensional, then for all \(T>t_0\) there exists a solution x(t) of (DS-0) on \([t_0,T]\) in the class \({\mathbb {B}}_{t_0,T}\),
-
2.
If \(\mathcal {X}\) is infinite-dimensional, then there exist \(R>0\) and \(T>t_0\) such that there exists a solution x(t) of (DS-0) on \([t_0,T]\) in the class \({\mathbb {B}}_{t_0,x_0,T}^R\),
Proof
Let us start with the initial settings.
-
1.
In case \(\mathcal {X}\) is finite-dimensional we take any \(T>t_0\). Let us note that in this case \({\hat{{\mathcal {D}}}}\) is closed and bounded, hence compact. Since F is continuous on \({\hat{{\mathcal {D}}}}\), F is uniformly continuous, i.e.
$$\begin{aligned} \forall \varepsilon>0\ \exists \delta >0\ \forall x_1,x_2\in {\hat{{\mathcal {D}}}} \quad \Vert x_{1}-x_{2}\Vert< \delta \implies \Vert F(x_{1})-F(x_{2})\Vert < \varepsilon . \end{aligned}$$ -
2.
In case \(\mathcal {X}\) is infinite-dimensional let \(T=\frac{R}{M}+t_0\), where R is such that \(F(\cdot )\) is Lipschitz on \(B(x_0,R)\). Let \(m_\uplambda {:}{=}\lceil (T-t_0)\uplambda ^{-1}\rceil \). Let us note that, since \(x_0\in {\hat{{\mathcal {D}}}}\setminus \{{\bar{z}}\}\), assumption (B) yields \(c_\uplambda (t)\in {\hat{{\mathcal {D}}}}\) for any \(\uplambda \in (0,1]\) and all \(t\in [t_0,T]\). For any \(\uplambda \in (0,1]\) the function \(c_\uplambda (\cdot )\) given by (16) is differentiable on \([t_0,T]\setminus \{t_{0},t_{0}+\uplambda ,\dots , t_0+m_{\uplambda } \uplambda \}\) as a piecewise affine function. For all \(\uplambda \in (0,1]\) and any \(t\in [t_0,T]\) (\(t=t_0+a\uplambda +{\tilde{t}}\), \(a\in {\mathbb {N}}\), \(0\le {\tilde{t}}<\uplambda \)) we have
$$\begin{aligned}&\Vert c_\uplambda (t)-x_0\Vert = \Vert x_0 + \uplambda \sum _{n=0}^{a-1} F(c_{\uplambda }(t_0+n\uplambda ))+{\tilde{t}}F(c_{\uplambda }(t_0+a\uplambda )) - x_0\Vert \\&\le \uplambda \sum _{i=0}^{a-1} M + {\tilde{t}} M= M(a\uplambda +{\tilde{t}})\le M (T-t_0) =\frac{R}{M} M = R. \end{aligned}$$Let us note that in this case F is uniformly continuous on \(B(x_0,R)\cap {\hat{{\mathcal {D}}}}\), i.e.
$$\begin{aligned} \forall \varepsilon>0\ \exists \delta >0\ \forall x_1,x_2\in B(x_0,R)\cap {\hat{{\mathcal {D}}}} \quad \Vert x_1-x_2\Vert< \delta \implies \Vert F(x_1)-F(x_2)\Vert < \varepsilon . \end{aligned}$$
Now let us continue the proof in both cases 1. and 2. together. For any \(\uplambda \in (0,1]\) define
Note that for all \(t\in [t_0,T]\),
We have
Let us note that \(c_\uplambda (\cdot )\) is Lipschitz continuous on \([t_0,T]\) because it is differentiable almost everywhere and the norm of its derivative is bounded by M. Therefore
Fix any \(\varepsilon >0\) and take \(\uplambda \in (0,1]\) such that \( M \uplambda <\delta \). Then for all \(n=0,\dots ,m_\uplambda \) we have
and consequently
Hence, for all \(\uplambda < \frac{\delta }{M}\), we have
Thus,
Let \(\{\uplambda _k\}_{k\in {\mathbb {N}}}\) be a sequence in (0, 1] such that \(\uplambda _{k}\rightarrow 0\) as \(k\rightarrow \infty \). By the Arzelà–Ascoli theorem, there exists a uniformly convergent subsequence of \(\{ c_{\uplambda _k}(t) \}_{k\in {\mathbb {N}}}\), namely \(\{ c_{\uplambda _{k_i}}(t) \}_{i\in {\mathbb {N}}}\), which converges to \(x(t)=\lim _{i\rightarrow +\infty } c_{\uplambda _{k_i}}(t)\) for \(t\in [t_0,T]\), i.e.
Therefore, for all \(t\in [t_0,T]\),
By Proposition 3, x(t) is a solution of (DS-0) on \([t_0,T]\). Since \(c_{\uplambda _{k_i}}(t)\in \hat{\mathcal {D}}\), \(i\in {\mathbb {N}}\), \(t\in [t_0,T]\), by the closedness of \({\hat{{\mathcal {D}}}}\), we obtain that \(x(t)\in \hat{\mathcal {D}}\) for \(t\in [t_0,T]\). \(\square \)
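The Euler construction used in this proof can be illustrated numerically. The sketch below uses the toy field \(F(x)=-x\) on the real line (globally Lipschitz, so strictly simpler than the setting of Proposition 4) and shows the Euler polygons \(c_\uplambda \) approaching the exact trajectory as \(\uplambda \rightarrow 0\):

```python
# Numerical sketch of the Euler polygon c_{n+1} = c_n + lam * F(c_n), linearly
# interpolated on the last partial interval, for the toy field F(x) = -x.
import math

def euler_value(F, x0, t0, T, lam):
    """Value at time T of the Euler polygon with step lam started at (t0, x0)."""
    t, x = t0, x0
    while t + lam <= T:
        x = x + lam * F(x)
        t += lam
    return x + (T - t) * F(x)   # affine piece on the remaining interval

F = lambda x: -x
exact = math.exp(-1.0)          # solution of x' = -x, x(0) = 1, at t = 1
errs = [abs(euler_value(F, 1.0, 0.0, 1.0, lam) - exact)
        for lam in (0.1, 0.01, 0.001)]
print(errs)  # decreasing: the polygons converge to the solution as lam -> 0
```

The monotone decrease of the errors mirrors the limit passage via the Arzelà–Ascoli argument above, although in this globally Lipschitz toy case no compactness is needed.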
Corollary 2
Let \(t_0\ge 0\), \(x_0\in {\hat{{\mathcal {D}}}}\setminus \{ {\bar{z}}\}\) be arbitrary and fixed. Assume that assumptions (A), (B), (C) are satisfied. Then there exist \(R>0\), \(T^{\prime }>0\) such that for all \(T\in [t_0,T^\prime )\) there exists a solution to (DS-0) on \([t_0,t_0+T]\) and it is unique in the class \({\mathbb {B}}_{t_0,x_0,T}^R\).
Proof
The proof follows from Propositions 2 and 4.\(\square \)
4.2 Contraction Mapping Principle for an Extended Vector Field F
We consider the following Cauchy problem
where \({\tilde{F}}: \mathcal {X}\rightarrow \mathcal {X}\) is such that \({\tilde{F}}(x)=F(x)\) for all \(x\in \hat{\mathcal {D}}\) and \({\tilde{F}}\) is continuous on \(\mathcal {X}\).
Lemma 1
([16, Lemma 1.2]) Let X, Y be Banach spaces, \(\Omega \subset X\) closed and \(f:\Omega \rightarrow Y\) continuous. Then there is a continuous extension \({\tilde{f}}:X\rightarrow Y\) of f such that \({\tilde{f}}(X)\subset \mathrm{conv}f(\Omega )\) (:=convex hull of \(f(\Omega )\)).
Proposition 5
Let \(t_0\ge 0\), \(x_0\in {\hat{{\mathcal {D}}}}\setminus \{ {\bar{z}} \}\). Then there exists \(T>0\) such that a solution of (DS-1) exists on the interval \([t_0,t_0+T]\).
Proof
For a given \(x\in C([t_0,t_0+T];\mathcal {X})\), define S[x] to be the function on \([t_0,t_0+T]\), given by
where \({\tilde{F}}\) is an extension of F given by Lemma 1. In what follows, note that by Lemma 1 the bound \(\Vert F\Vert \le M\) on \({\hat{{\mathcal {D}}}}\) carries over to \({\tilde{F}}\) on \(\mathcal {X}\).
-
Step 1.
If \(x\in C([t_0,t_0+T];\hat{\mathcal {D}})\), then \(S[x]\) makes sense, since the right-hand side is well defined.
-
Step 2.
Let us prove that \(S[x](\cdot )\in C([t_0,t_0+T];\mathcal {X})\) for any \(T>0\) and for \(x\in C([t_0,t_0+T];\mathcal {X})\). Assume \(t_1,t_2\in [t_0,t_0+T]\) with \(t_1<t_2\). It is evident that
$$\begin{aligned} S[x](t_2)= S[x](t_1)+\int _{t_1}^{t_2}{\tilde{F}}(x(\tau ))\,d\tau . \end{aligned}$$(20)Hence
$$\begin{aligned} \Vert S[x](t_2)-S[x](t_1)\Vert _{\mathcal {X}}=\left\| \int _{t_1}^{t_2}{\tilde{F}}(x(\tau ))\,d\tau \right\| _{\mathcal {X}}\le \max _{\tau \in [t_{0},t_{0}+T]}\Vert {\tilde{F}}(x(\tau ))\Vert _{\mathcal {X}}\cdot |t_{2}-t_{1}|, \end{aligned}$$which yields the continuity of S[x] as \(t_2\rightarrow t_1\). Thus \(S:C([t_0,t_0+T];\mathcal {X})\longrightarrow C([t_0,t_0+T];\mathcal {X})\).
-
Step 3.
Denote \(C_0{:}{=}C([t_0,t_0+T];\mathcal {X})\). Consider the following ball in \(C_0\), in which we look for a fixed point.
$$\begin{aligned} C_{0D}{:}{=}\left\{ x(t)\in C_0\mid \quad |x-x_0|_{C_{0}}\equiv \max _{t\in [t_0,t_0+T]}\Vert x(t)-x_0 \Vert _\mathcal {X}\le 1/2,\ x_0\in \hat{\mathcal {D}} \right\} . \end{aligned}$$Clearly, \(C_{0D}(\subseteq C_0)\) is a complete metric space with the metric induced by the norm of \(C_0\). Let us show that, for T chosen small enough, the operator S maps \(C_{0D}\) into itself and has a fixed point. By Step 2, \(S[x](\cdot )\in C_0\) whenever \(x(\cdot )\in C_{0D}\). We now show that \(S[x](\cdot )\in C_{0D}\). It follows from (20) that
$$\begin{aligned} |S[x]-x_0|_{C_{0}}=\max _{t\in [t_0,t_0+T]}\Vert S[x](t)-x_0\Vert _{\mathcal {X}}=\max _{t\in [t_0,t_0+T]}\left\| \int _{t_0}^{t}\left( {\tilde{F}}(x(\tau ))\right) \,d\tau \right\| _\mathcal {X}\\ \le \max _{\tau \in [t_0,t_0+T]}\Vert {\tilde{F}}(x(\tau ))\Vert _{\mathcal {X}} T{=}{:} cT. \end{aligned}$$Therefore, for a choice of \(T\le 1/(2c)\),
$$\begin{aligned} |S[x]-x_0|_{C_{0}}\le 1/2. \end{aligned}$$Hence \(S[x](\cdot )\in C_{0D}\), i.e., \(S:C_{0D}\rightarrow C_{0D}\) for every \(T\le 1/(2c)\).
-
Step 4
We shall now show that the Picard sequence \(\{ x_n(\cdot )\}_{n\ge 1}\subseteq C_{0D}\) is a Cauchy sequence. Let the initial point \(x_0\in \hat{\mathcal {D}}\) be given and define \(x_0(\cdot ){:}{=}x_0\), \(x_1(\cdot ){:}{=}S[x_0](\cdot )\) and \(x_{n+1}(\cdot ){:}{=}S[x_n](\cdot )\), \(n=1,2,\dots \). Then the following estimates hold successively.
$$\begin{aligned}&|x_{n+1}-x_{n}|_{C_{0}}=|S[x_n]-S[x_{n-1}]|_{C_{0}}\le cT|x_n - x_{n-1}|_{C_{0}}\\&\le \cdots \le (cT)^{n}|x_1-x_0|_{C_{0}}=(cT)^{n}\max _{t\in [0,T]}\Vert x_1(t)-x_0\Vert _{\mathcal {X}}\le c^n T^{n+1}\Vert F(x_{0})\Vert _{\mathcal {X}}. \end{aligned}$$Let \(m, n\in {\mathbb {N}}\) with \(m > n\) and set \(\delta {:}{=}cT\in [0,1)\). Then
$$\begin{aligned}&|x_m - x_n|_{C_{0}}\le |x_m - x_{m-1}|_{C_{0}}+|x_{m-1} - x_{m-2}|_{C_{0}}+\cdots +|x_{n+1}-x_n|_{C_{0}}\\&\le (\delta ^{m}+\delta ^{m-1}+\cdots +\delta ^{n+1})\frac{\Vert F(x_{0})\Vert _{\mathcal {X}}}{c} = \delta ^{n+1}\frac{\Vert F(x_{0})\Vert _{\mathcal {X}}}{c}\sum _{k=0}^{m-n}\delta ^k\\&\le \delta ^{n+1}\frac{\Vert F(x_{0})\Vert _{\mathcal {X}}}{c}\sum _{k=0}^{\infty }\delta ^k {\mathop {=}\limits ^{\delta <1}} \frac{\delta ^{n+1}\Vert F(x_{0})\Vert _{\mathcal {X}}}{c(1-\delta )}. \end{aligned}$$Let \(\varepsilon >0\). Moreover, since \(\delta \in [0, 1)\), we can find a large number \(N \in {\mathbb {N}}\) so that
$$\begin{aligned} \delta ^{N+1}< \varepsilon c(1-\delta )/\Vert F(x_{0})\Vert _{\mathcal {X}}. \end{aligned}$$Therefore, for \(m,n>N\in {\mathbb {N}}\),
$$\begin{aligned} |x_{m} - x_{n}|_{C_{0}}\le \varepsilon . \end{aligned}$$Hence the sequence \(\{x_n(\cdot )\}_{n\ge 1}\subseteq C_{0D}\) is Cauchy and therefore converges to some \({\bar{x}}(\cdot )\in C_{0D}\), which satisfies
$$\begin{aligned} {\bar{x}}(t)= x_{0}+\int _{t_0}^{t}{\tilde{F}}({\bar{x}}(\tau ))\,d\tau ,\quad \forall t\in [t_0,t_0+T]. \end{aligned}$$(21)By Proposition 3, \({\bar{x}}(\cdot )\) is a solution of (DS-1) for \(t\in [t_0,t_0+T]\).
\(\square \)
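The Picard scheme of Step 4 can be sketched numerically. The example below uses the toy field \(F(x)=-x\) on [0, 1] with \(x_0=1\) and a trapezoidal discretization of the integral; it is illustrative only and not the operator S of the proof:

```python
# Sketch of the Picard iterates x_{n+1} = S[x_n],
# S[x](t) = x0 + int_{t0}^t F(x(s)) ds, for the toy field F(x) = -x on [0, 1]
# with x0 = 1 (integral discretized by the trapezoidal rule).
import math

N = 1000                               # grid points on [0, 1], step 1/N
ts = [i / N for i in range(N + 1)]
F = lambda x: -x

def S(xs):
    """One Picard step: x0 plus the trapezoidal cumulative integral of F."""
    out, acc = [1.0], 0.0
    for i in range(1, len(xs)):
        acc += 0.5 * (F(xs[i - 1]) + F(xs[i])) / N
        out.append(1.0 + acc)
    return out

xs = [1.0] * (N + 1)                   # x_0(.) identically the initial point
for _ in range(20):
    xs = S(xs)

err = max(abs(x - math.exp(-t)) for x, t in zip(xs, ts))
print(err)  # small: the iterates approach the exact solution e^{-t}
```

After 20 iterations the iterate is essentially the truncated Taylor series of \(e^{-t}\), so the remaining error is dominated by the quadrature step, consistent with the contraction estimate \(|x_{n+1}-x_n|\le (cT)^n|x_1-x_0|\).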
Remark 4
The proof of the above proposition does not work if \({\tilde{F}}\) is replaced by F defined only on the set \({\hat{{\mathcal {D}}}}\). This comes from the fact that the operator
may map a function \(x(\cdot )\) outside of \({\hat{{\mathcal {D}}}}\), in which case we cannot apply Step 4 of the proof. However, in the case when \(x_0\in \text {int}\, {\hat{{\mathcal {D}}}}\), the following corollary holds.
Corollary 3
We have the following relationships between (DS) and (DS-1):
-
1.
if \(x_0\in \text {int} {\hat{{\mathcal {D}}}}\), then there exists a function \(x(\cdot )\in C^1([t_0,t_0+T];{\hat{{\mathcal {D}}}})\), which is a unique solution of (DS) and (DS-1) on \([t_0,t_0+T]\) for some \(T>0\);
-
2.
if \(x_0\in \partial {\hat{{\mathcal {D}}}}\) and assumption (B) holds, then the solution of (DS) is unique on \([t_{0},t_{0}+T_{1}]\) for some \(T_{1}>0\) and the solution of (DS-1) exists on \([t_{0},t_{0}+T_{2}]\) for some \(T_{2}>0\).
Proof
The proof follows the lines of the proof of Proposition 5 up to Step 3, replacing \({\tilde{F}}\) with F; then we proceed as follows.
We consider the following two cases.
-
Case 1.
Suppose \(x_0\in \hat{\mathcal {D}}\) is such that \(\rho {:}{=}\inf \limits _{y\in \partial \hat{\mathcal {D}}}\Vert x_{0}-y\Vert _{\mathcal {X}}{=}{:}\mathrm{dist}(x_{0},\partial \hat{\mathcal {D}})>0\).
-
Case 2.
Suppose \(x_0\in \hat{\mathcal {D}}\) such that \(\rho =0\). Then one can follow the proof of Proposition 5.
We look for a solution to (DS) for Case 1. Let us consider
Since \(x_0\in \hat{\mathcal {D}}{:}{=}\{ x\in {\mathcal {D}} \mid \Vert x-{\bar{w}}\Vert ^2\ge r >0 \}\), we have \(\rho {:}{=}\Vert x_0 -{\bar{w}}\Vert >0\). Let us consider the following two possible cases for fixed \(r>0\):
-
(i)
if \(\rho >2r\), then consider the ball \(B_{\frac{\rho }{2}}(x_0)\subset \hat{\mathcal {D}}\);
-
(ii)
if \(\rho <2r\), then consider the ball \(B_{r-\frac{\rho }{2}}(x_0)\subset \hat{\mathcal {D}}\).
Thereafter, as in Step 4 of Proposition 5, we show the existence of a Cauchy sequence in \(C_{0D}\).
-
Step 5.
Moreover, \(C_{0D}\) is a closed subset of \(C_0\). Indeed, this follows from the continuity of S and the implication
$$\begin{aligned} x_n\in {\mathcal {D}}\Rightarrow \lim \limits _{n\rightarrow \infty }x_n{=}{:}\hat{x}\in {\mathcal {D}}, \text {\,\,since\,\,} {\mathcal {D}} ~\text {is closed in}\ \mathcal {X}. \end{aligned}$$ -
Step 6.
Finally, \({\mathcal {D}}\ni \hat{x}\) must be a fixed point of \(S:C_{0D}\rightarrow C_{0D}\). Indeed,
$$\begin{aligned} \hat{x}=\lim \limits _{n\rightarrow \infty }x_n=\lim \limits _{n\rightarrow \infty }S[x_{n-1}]{\mathop {=}\limits ^{\text {continuity of} ~S}}S\left[ \lim \limits _{n\rightarrow \infty }x_{n-1}\right] =S[{\hat{x}}]. \end{aligned}$$
Hence, we arrive at a solution to (DS).\(\square \)
In the following example we show that the existence of solutions of (DS) is not guaranteed without assumption (B); however, solutions of (DS-1) still exist due to Proposition 5.
Example 1
Let \(\mathcal {X}={\mathbb {R}}^2\), \({\bar{w}}=(-1,0)\), \({\bar{z}}=(1,0)\), \({\hat{{\mathcal {D}}}}={\bar{B}}((0,0),1)\setminus B((-1,0),1)\) and let \(F: {\hat{{\mathcal {D}}}}\rightarrow \mathcal {X}\) be defined as
Then assumptions (A) and (C) are satisfied. Consider \(x_0=x(0)=(0,-1)\). Then there is no solution of (DS). By extending F(x) in a continuous way:
we obtain that one solution of (DS-1) is \(x(t)=(1-e^{-t},-1)\).
The following example shows that by considering (DS-1) under assumption (B) we may lose the uniqueness of solutions in the sense of Definition 1.
Example 2
Let \(\mathcal {X}={\mathbb {R}}^2\), \({\bar{w}}=(0,-1)\), \({\bar{z}}=(1,0)\), \({\hat{{\mathcal {D}}}}=[0,1]\times [-1,0]\setminus B((0,-1),1)\) and let \(F: {\hat{{\mathcal {D}}}}\rightarrow \mathcal {X}\) be defined as
Then assumptions (A), (B) and (C) are satisfied. Consider \(x_0=x(0)=(0,0)\). By extending F(x) in a continuous way:
We obtain that the system (DS-1) has more than one solution. For example:
5 Extendability of Solutions to (DS-0)
In this section we prove Theorem 1.
The proof is based on two lemmas, Lemmas 2 and 5 (see “Appendix”). The proposed approach follows the lines of Lecture 3 of the lecture notes [3]. The crucial assumptions are (A), (B) and (C) (see Lemma 2 below). For more general results and examples on the extendability of solutions, see e.g., [21] and the references therein.
Let
As a consequence of Proposition 4 and Corollary 2, we have the following 'non-branching' result.
Lemma 2
Suppose that assumptions (A), (B) and (C) are satisfied. Let \(x_1(t)\), \(x_2(t)\) be solutions to problem (DS) in the sense of Definition 1 on \({{\mathcal {T}}}_1\), \({{\mathcal {T}}}_2\), respectively. Then one of these solutions is a prolongation of the other (in particular, they coincide if \({\mathcal {T}}_1={\mathcal {T}}_2)\).
Proof
On the contrary, suppose that
Consider the set
Let us note that \(t_0\notin {\mathcal {T}}^{\ne }\) (by the initial condition of (DS)). Furthermore, the set \({\mathcal {T}}^{\ne }\) is open in \({\mathcal {T}}_1\cap {\mathcal {T}}_2\), because it is the inverse image of \((0,+\infty )\) under the continuous mapping \(t\mapsto \Vert x_1(t)-x_2(t)\Vert \) defined on \({\mathcal {T}}_1\cap {\mathcal {T}}_2\).
Put
Let us note that \(T^{*}\notin {\mathcal {T}}^{\ne }\) (hence \(x_1(T^{*})=x_2(T^{*})\)). Indeed, if \(T^{*}=t_0\), then \(t_0\notin {\mathcal {T}}^{\ne }\) because \(x_1(t_0)=x_2(t_0)\).
If \(T^{*}>t_0\), then \(T^{*}\) is a boundary point of \({\mathcal {T}}^{\ne }\), so \(T^{*}\notin {\mathcal {T}}^{\ne }\) since \({\mathcal {T}}^{\ne }\) is open in \({\mathcal {T}}_1\cap {\mathcal {T}}_2\). This means that every right-hand side half-neighbourhood of the point \(T^{*}\) contains some \(t_1>T^{*}\) with \(t_1\in {\mathcal {T}}^{\ne }\subsetneq {\mathcal {T}}_1\cap {\mathcal {T}}_2\); in particular, the intersection of every such half-neighbourhood with \({\mathcal {T}}^{\ne }\) is nonempty.
Take any \(\alpha >T^{*}\) and any \(t_1\in {\mathcal {T}}^{\ne }\cap [T^{*},\alpha )\). By Remark 2, the functions \(x_1(t)\), \(x_2(t)\) are solutions to the Cauchy problem
on interval \([T^{*},t_1]\). Since \(x_1(t), x_2(t)\in {\hat{{\mathcal {D}}}}\) for all \(t\in [T^{*},t_1]\) and the set \({{\hat{{\mathcal {D}}}}}\) is bounded, we have
By Corollary 2, there exists \(T^\prime >t_0\) such that for any \(T\in (t_{0},T^\prime ]\), the solution of the Cauchy problem (22) on the interval \([T^{*},T^{*}+T]\) satisfying
is unique. Taking \(T=\min \{ T^\prime , t_1-T^{*} \}\) we arrive at a contradiction with Corollary 2 because, by (23), condition (24) holds both for \(x_1(t)\) and \(x_2(t)\), but the functions \(x_1(t)\) and \(x_2(t)\) differ in every right-hand side half-neighbourhood of \(T^{*}\). \(\square \)
Now we are ready to prove Theorem 1.
Proof of Theorem 1
By Corollary 2, there exists a solution of problem (DS-0) on some interval \([t_0, T]\) (\(T>t_0\)) in the class \({\mathbb {B}}_{t_0,x_{0},T}^R\) for some \(R>0\). By Lemma 2, for any two solutions of problem (DS-0) on different intervals, one is a prolongation of the other.
Consider now, for any \(T>t_0\), all functions from \(C^1([t_0,T],{\hat{{\mathcal {D}}}})\). Among these functions, solutions of problem (DS-0) may or may not exist. Put
If \(T_0=+\infty \), there exists a solution \({\tilde{x}}(t)\in C^1([t_0,+\infty ),{\hat{{\mathcal {D}}}})\) to problem (DS-0). Indeed, take a monotone increasing sequence \(T_n\rightarrow +\infty \) and the corresponding sequence of solutions \(\{x_n(t)\}\); by Lemma 2, for all \(n\in {\mathbb {N}}\) the solution \(x_{n+1}\) is a prolongation of \(x_n\). Hence the function
is a solution defined on \([t_0,+\infty )\). Other solutions (which do not coincide with the restrictions of \({\tilde{x}}(t)\) on smaller intervals) do not exist by Lemma 2. In the rest of the proof, we show that this is the only possible case.
Consider now \(T_0<+\infty \). Then two cases are possible:
- (a) \(T_0\in {\mathbb {T}}\),
- (b) \(T_0\notin {\mathbb {T}}\).
In case (a) there exists a solution \(x(\cdot )\in C^1([t_0,T_0];{\hat{{\mathcal {D}}}})\) to problem (DS-0). But then, by Corollary 2 applied to problem (DS-0) with \(t_0=T_0\), the solution can be extended beyond \(T_0\), and both one-sided derivatives \({\dot{x}}_{-} (T_0)\) and \({\dot{x}}_{+} (T_0)\) exist and equal \(F(x(T_0))\): the left one by the definition of a solution on \([t_0,T_0]\), the right one by the definition of a solution to our problem on an interval starting at \(T_0\). As a consequence, we get a solution on a larger interval and arrive at a contradiction with the definition of \(T_0\). This excludes case (a).
In case (b), by arguments analogous to the case \(T_0=+\infty \), we get the existence and uniqueness of a solution x(t) of (DS-0) on the semi-interval \([t_0,T_0)\). Case (b) splits into two subcases:
- 1. \(\limsup _{t\rightarrow T_0^{-}} \Vert x(t)\Vert = + \infty \) (i.e., the solution is unbounded in every left-sided neighbourhood of \(T_0\));
- 2. \(\limsup _{t\rightarrow T_0^{-}} \Vert x(t)\Vert < + \infty \).
Subcase 1 is impossible in view of the boundedness of the set \({{\hat{{\mathcal {D}}}}}\). Now we show that subcase 2 is also impossible.
Indeed, let the function x(t) be bounded on the whole half-interval \([t_0,T_0)\):
We have
However, from the equation (DS-0), it follows that the function x(t) is Lipschitz continuous with a constant M on \((t_0,T_0)\), since \(\Vert \dot{x}(t)\Vert \le M\) for all \(t\in (t_0,T_0)\).
Hence, by Lemma 5 (see “Appendix”), there exists the limit
Let us define Y(t) on \([t_0,T_0]\) by setting \(Y(t)=x(t)\) for \(t<T_0\) and \(Y(T_0){:}{=}Y_0\). The obtained function Y(t) is continuous from the left at \(T_0\). Then, by Lemma 3 (see “Appendix”), the function F(Y(t)) is also continuous from the left at \(T_0\) and hence
Since for \(t<T_0\) we have \({\dot{x}}(t)=F(x(t))\), from the last formula we get
However, by the lemma on extendability at a point (Lemma 4, see “Appendix”), it follows that the function x(t) can be extended from \([t_0,T_0)\) onto \([t_0,T_0]\) with preservation of continuous differentiability (let us denote the obtained function by Y(t)), with \({\dot{Y}}(T_0)=F(Y_0)\), and Y(t) is a solution on \([t_0,T_0]\). We arrive at a contradiction in subcase 2 of case (b), since in case (b) solutions on \([t_0, T_0]\) do not exist. \(\square \)
6 Behaviour of Trajectories at \(+\infty \)
In this section we prove Theorem 2 and provide other results concerning the convergence of trajectories.
Proposition 6
Let x(t), \(t\in [t_0,+\infty )\) be a solution of (DS-0). Suppose that there exists an increasing sequence \(\{t_n\}_{n\in {\mathbb {N}}}\), \(t_n\rightarrow +\infty \), such that \(x(t_n)\rightarrow {\bar{z}}\). Then \(x(t)\rightarrow {\bar{z}}\) as \(t\rightarrow +\infty \).
Proof
Let \(\{t_n\}_{n\in {\mathbb {N}}}\), \(t_n\rightarrow +\infty \), be such that \(x(t_n)\rightarrow {\bar{z}}\). We will show that for every \(\varepsilon >0\) and every increasing sequence \(\{s_n\}_{n\in {\mathbb {N}}}\), \(s_n\rightarrow +\infty \), there exists \(n_0\in {\mathbb {N}}\) such that \(\Vert x(s_n)-{\bar{z}}\Vert \le \varepsilon \) for all \(n\ge n_0\). Take any \(\varepsilon >0\) and an increasing sequence \(\{s_n\}_{n\in {\mathbb {N}}}\), \(s_n\rightarrow +\infty \).
We have
hence the function \(\Vert x(\cdot )-{\bar{w}}\Vert ^2\) is nondecreasing. Moreover, by (9) (see also Lemma 6) and convergence of \(x(t_n)\), for all \(\varepsilon ^\prime >0\) there exists \(n_0^\prime \in {\mathbb {N}}\) such that for all \(n>n_0^\prime \)
Take \(\varepsilon ^\prime =\varepsilon \) and \(n_0\) such that \(s_{n_0}\ge t_{n_0^\prime }\).
Then, by (9) and the fact that \(\Vert x(\cdot )-{\bar{w}}\Vert ^2\) is nondecreasing we obtain: for all \(n\ge n_0\)
\(\square \)
Now we give the proof of Theorem 2.
Proof of Theorem 2
By (12), we have \({{\tilde{x}}}={{\bar{z}}}\), i.e., \( x(t_{n_k})\) converges weakly to \({{\bar{z}}}\). By (9), the following inequality holds for this subsequence
and hence
Since the norm is weakly lower semicontinuous, we also have
This and \((*)\) imply
Consequently, there is a subsequence \(t_{n_{k_m}}\) such that
Thus we have shown that for any sequence \(\{t_n\}_{n\in {\mathbb {N}}}\), \(t_n\rightarrow \infty \), there exists a subsequence \(\{t_{n_{k_m}}\}_{m\in {\mathbb {N}}}\) such that the above condition holds.
This means that \(\Vert x(t)-{{\bar{z}}}\Vert \rightarrow 0\) as \(t\rightarrow +\infty \).\(\square \)
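The final step, passing from weak convergence together with convergence of the norms to strong convergence, can be spelled out by expanding the square (a standard Hilbert-space identity, recorded here for the reader's convenience):

```latex
\Vert x(t_{n_k})-\bar{z}\Vert ^2
  = \Vert x(t_{n_k})-\bar{w}\Vert ^2
    - 2\langle x(t_{n_k})-\bar{w} \mid \bar{z}-\bar{w} \rangle
    + \Vert \bar{z}-\bar{w}\Vert ^2 .
```

Since \(x(t_{n_k})\rightharpoonup {\bar{z}}\), the inner product tends to \(\Vert {\bar{z}}-{\bar{w}}\Vert ^2\), while \(\Vert x(t_{n_k})-{\bar{w}}\Vert ^2\rightarrow \Vert {\bar{z}}-{\bar{w}}\Vert ^2\), so the right-hand side tends to 0.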
In the next two propositions we propose variants of Theorem 2 in which we replace assumption (12) by other assumptions.
In the finite-dimensional case, the assertion of Theorem 2 can be obtained without assuming (12). Instead, we need to assume a strengthened form (*) of assumption (C) on the vector field F.
Recall that the assumption (C) says that \(\langle F(x) | {\bar{w}} - x \rangle \le 0\) for all \(x\in {\hat{{\mathcal {D}}}}\).
Proposition 7
Let \(\mathcal {X}\) be a finite-dimensional space, let x(t), \(t\in [t_0,+\infty )\) be a solution of (DS-0) and assume that
Then \(\lim \limits _{t\rightarrow +\infty } x(t) ={\bar{z}}\).
Proof
Let \(g(t){:}{=}\frac{d}{dt}\Vert x(t)-{\bar{w}}\Vert ^{2}\), \(t\ge t_{0}.\) We start by showing that there exists a sequence \(\{t_k\}\), \(t_k\rightarrow +\infty \) such that \(\lim \limits _{k\rightarrow +\infty } g(t_k)= 0\).
On the contrary, suppose that there exist \(\varepsilon >0\) and \(t^\prime \ge t_0\) such that \(g(t)>\varepsilon \) for all \(t>t^\prime \). Hence, for all \(t> t^{\prime }\)
By taking
we arrive at \(\Vert x(t)-{\bar{w}}\Vert ^2>\Vert {\bar{z}}-{\bar{w}}\Vert ^2\), i.e., \(x(t)\notin {\hat{{\mathcal {D}}}}\), a contradiction. In this way, we have proved that there exists a sequence \(\{t_k\}_{k\in {\mathbb {N}}}\) such that \(t_k\rightarrow +\infty \) and \(\lim \limits _{k\rightarrow +\infty } g(t_k)= 0\).
Since \(\mathcal {X}\) is finite-dimensional and \({\hat{{\mathcal {D}}}}\) is closed and bounded, \({\hat{{\mathcal {D}}}}\) is compact. Hence there exists a subsequence of \(\{t_k\}_{k\in {\mathbb {N}}}\), namely \(\{ t_{k_n} \}_{n\in {\mathbb {N}}}\), such that \(x(t_{k_n})\) converges and \(\lim \limits _{n\rightarrow +\infty } x(t_{k_n})={\tilde{x}}\in {\hat{{\mathcal {D}}}}\). Without loss of generality, we may assume that the sequence \(\{t_{k_n} \}_{n\in {\mathbb {N}}}\) is increasing.
By (8),
We have
hence, by assumption, \({\tilde{x}}={\bar{z}}\). Now the assertion follows from Proposition 6. \(\square \)
Remark 5
By examining the above proof, we see that the assertion of Proposition 7 remains true in an infinite-dimensional Hilbert space \(\mathcal {X}\) under the following additional assumption (to (*)) on F:
i.e., for any weakly convergent sequence \({\hat{{\mathcal {D}}}}\ni x_{n}\rightharpoonup {\bar{x}}\) we have \(\lim \limits _{n\rightarrow +\infty } F(x_n)=F({\bar{x}})\), where the limit is strong.
The need for this additional assumption follows from the fact that if \(v_{n}\rightarrow v\) and \(u_{n}\rightharpoonup u\), then \(\langle v_{n}| u_{n}\rangle \rightarrow \langle v| u\rangle \). Indeed,
This fact allows one to show (26).
The following proposition is a variant of Proposition 7, valid in an infinite-dimensional Hilbert space under the more restrictive form (C**), given below, of condition (*).
Proposition 8
Let \(\mathcal {X}\) be an infinite-dimensional space and let x(t), \(t\in [t_0,+\infty )\) be a solution of (DS-0). Assume that for all \(t\in [t_0,+\infty )\) such that \(x(t)\ne {\bar{z}}\), we have
where \(\alpha :[t_{0},+\infty ) \rightarrow {\mathbb {R}}_{-}\) is a function integrable on every interval \([t_{0},T]\), \(T>t_0\), and there exist \(T'>t_0\) with \(\sqrt{T^\prime }-\frac{t_0}{\sqrt{T^\prime }}> \frac{1}{2}(\Vert {\bar{w}}-{\bar{z}}\Vert ^2-\Vert x(t_0)-{\bar{w}}\Vert ^2)\) and \(\varepsilon \le \frac{-1}{\sqrt{T^\prime }}\) such that \(\sup \limits _{[t_{0},T'] } \alpha (s)<\varepsilon \). Then \(\lim \limits _{t\rightarrow +\infty } x(t) ={\bar{z}}\).
Proof
Let us note that if there exists \(t^\prime \in [t_0,+\infty )\) such that \(x(t^\prime )={\bar{z}}\), then \(x(t)={\bar{z}}\) for all \(t>t^\prime \), since \(F(x(t^\prime ))=F({\bar{z}})=0\).
Consider now the situation that \(x(t)\ne {\bar{z}}\) for any \(t\in [t_0,+\infty )\). By contradiction, suppose that \(x(t)\not \rightarrow {\bar{z}}\). Then, in view of Proposition 6, there exists \(\varepsilon >0\) such that \(x(t)\notin B({\bar{z}},\varepsilon )\) for all \(t\in [t_0,+\infty )\).
We have that for all \(t>T'\)
Thereby, for such \(t> \frac{\Vert {\bar{w}}-{\bar{z}}\Vert ^2}{2c}+t_0 \), we arrive at a contradiction with \(x(t)\in {\hat{{\mathcal {D}}}}\subset {\mathcal {D}}\).\(\square \)
Proposition 9
Let \(\mathcal {X}\) be an infinite-dimensional space and let x(t), \(t\in [t_0,+\infty )\) be a solution of (DS-0). Assume that for all \(\varepsilon \) such that \(0<\varepsilon <\Vert x_0-{\bar{z}}\Vert \) we have \(\inf \limits _{x \in {\hat{{\mathcal {D}}}} \setminus B({\bar{z}},\varepsilon )}\langle F(x) | {\bar{w}}-x \rangle <0\). Then \(\lim \limits _{t\rightarrow +\infty } x(t) ={\bar{z}}\).
Proof
If there exists \(t'\in [t_0,+\infty )\) such that \(\langle F(x(t')) \mid {\bar{w}}-x(t')\rangle =0\), then we are done: in view of the assumptions of the proposition, \(x(t^\prime )={\bar{z}}\), and by (9) and Proposition 1, \(x(t)={\bar{z}}\) for all \(t\ge t'\).
Suppose now that for all \(t\in [t_0,+\infty )\) we have \(\langle F(x(t)) \mid {\bar{w}}-x(t)\rangle <0\). For any \(t>t_0\)
Therefore \(\inf \limits _{s\in [t_0,t]} \langle F(x(s)) | {\bar{w}}-x(s)\rangle \rightarrow 0\) as \(t\rightarrow +\infty \). Note that \(\alpha (s){:}{=}\langle F(x(s)) | {\bar{w}}-x(s)\rangle \) is a continuous function on every \([t_0,t]\), \(t>t_0\). Hence, there exists an increasing sequence \(\{t_n\}_{n\in {\mathbb {N}}}\), \(t_n\rightarrow +\infty \), such that \( \langle F(x(t_n)) | {\bar{w}}-x(t_n)\rangle \rightarrow 0\). We claim that \(x(t_n)\rightarrow {\bar{z}}\).
Suppose, on the contrary, that \(x(t_n)\not \rightarrow {\bar{z}}\) as \(n\rightarrow +\infty \). Then there exist \(\varepsilon >0\) and a subsequence \(\{t_{n_k}\}_{k\in {\mathbb {N}}}\) such that \(x(t_{n_k})\in {\hat{{\mathcal {D}}}} \setminus B({\bar{z}},\varepsilon )\). Since \(\inf \limits _{x \in {\hat{{\mathcal {D}}}} \setminus B({\bar{z}},\varepsilon )}\langle F(x) | {\bar{w}}-x \rangle <0\), there exists \(c<0\) such that \(\langle F(x(t_{n_k})) | {\bar{w}}-x(t_{n_k}) \rangle <c\), a contradiction to \(\lim \limits _{k\rightarrow +\infty }\langle F(x(t_{n_k})) | {\bar{w}}-x(t_{n_k}) \rangle =0\).
Hence \(x(t_n)\rightarrow {\bar{z}}\). Now the assertion follows from Proposition 6. \(\square \)
7 Projective Dynamical System
In this section, we give an example of the system (DS-0). Let \({\bar{w}}, {\bar{z}} \in \mathcal {X}\). We consider the projective dynamical system
where \({\mathbb {C}}: {\hat{\mathcal {D}}} \rightrightarrows \mathcal {X}\) is a multifunction such that:
- \((A^{\prime })\): for all \(x\in {\hat{\mathcal {D}}}\), \({\bar{z}}\in {\mathbb {C}}(x)\), and \(P_{{\mathbb {C}}(x)}({\bar{w}})=x\) iff \(x={\bar{z}}\);
- \((B^{\prime })\): for all \(x\in {\hat{{\mathcal {D}}}}\) we have \(P_{{\mathbb {C}}(x)}({\bar{w}})\in {\mathcal {D}}\);
- \((C^{\prime })\): \(\langle P_{{\mathbb {C}}(x)}({\bar{w}})-x \mid {\bar{w}} - x \rangle \le 0\) for all \(x\in {\hat{\mathcal {D}}}\);
- \((D^{\prime })\): for all \(x\in {\hat{\mathcal {D}}}\), \({\mathbb {C}}(x)\) is closed and convex.
Condition \((\mathrm{D}^{\prime })\) ensures that the projection onto \({\mathbb {C}}(x)\), \(x\in {\hat{{\mathcal {D}}}}\), is uniquely defined.
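Each set \(H(a,b)\) is either the whole space (when \(a=b\)) or a closed half-space, so the projection onto it is available in closed form. A plain-Python sketch (the helper name `project_halfspace` is ours):

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def project_halfspace(x, a, b):
    """Projection of x onto H(a, b) = {h : <h - b | a - b> <= 0}.

    For a == b, H(a, b) is the whole space; otherwise it is the closed
    half-space with outward normal a - b whose boundary passes through b."""
    g = [ai - bi for ai, bi in zip(a, b)]
    gg = dot(g, g)
    if gg == 0.0:
        return list(x)                    # degenerate case: whole space
    viol = dot([xi - bi for xi, bi in zip(x, b)], g)
    if viol <= 0.0:
        return list(x)                    # x already belongs to H(a, b)
    return [xi - (viol / gg) * gi for xi, gi in zip(x, g)]

# In R^2, H((1, 0), (0, 0)) = {h : h_1 <= 0}, so (2, 3) projects to (0, 3).
print(project_halfspace([2.0, 3.0], [1.0, 0.0], [0.0, 0.0]))  # [0.0, 3.0]
```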
The condition \(\langle P_{{\mathbb {C}}(x)}({\bar{w}})-x \mid {\bar{w}} - x \rangle \le 0\) for all \(x\in {\hat{\mathcal {D}}}\) is equivalent to the condition that \(P_{{\mathbb {C}}(x)}({\bar{w}}) \in H({\bar{w}},x)\) for any \(x\in {\hat{\mathcal {D}}}\). In particular, if for any \(x\in {\hat{\mathcal {D}}}\) and for any \(h\in {\mathbb {C}}(x)\) we have \(\langle h-x \mid {\bar{w}} - x \rangle \le 0\), then \(P_{{\mathbb {C}}(x)}({\bar{w}}) \in H({\bar{w}},x)\). Therefore, \((\mathrm{C}^{\prime })\) is equivalent to the condition:
Remark 6
Let us comment on the conditions \((\mathrm{A}^{\prime })\), \((\mathrm{B}^{\prime })\), \((\mathrm{C}^{\prime })\). Condition \((\mathrm{A}^{\prime })\) is equivalent to saying that \({\bar{z}}\) is the only stationary point of the vector field \(F(x)=P_{{\mathbb {C}}(x)}({\bar{w}})-x\) inside the considered set \({\hat{\mathcal {D}}}\). Condition \((\mathrm{B}^{\prime })\), together with the convexity of the set \({\mathcal {D}}\), ensures that for any \(\uplambda \in [0,1]\) and any \(x\in {\hat{\mathcal {D}}}\subset {\mathcal {D}}\) we have \((1-\uplambda ) x + \uplambda P_{{\mathbb {C}}(x)}({\bar{w}}) \in {\mathcal {D}}\). Condition \((\mathrm{C}^\prime )\) ensures that \(P_{{\mathbb {C}}(x)}({\bar{w}})\in H({\bar{w}},x)\) and that the function \(t\mapsto \Vert x(t)-{\bar{w}}\Vert \) is nondecreasing (see, e.g., Proposition 1), where x(t) is a solution of (PDS) (whenever it exists).
As consequences of Theorems 1 and 2 we can formulate the following theorems.
Theorem 3
Suppose that (A\(^\prime )\), (B\(^\prime )\), (C\(^\prime )\), (D\(^\prime )\) hold. Assume that \(x\mapsto P_{{\mathbb {C}}(x)}({\bar{w}})\) is locally Lipschitz continuous on \({\hat{{\mathcal {D}}}}\setminus \{{\bar{z}}\}\) and continuous on \({\hat{\mathcal {D}}}\). Then the system (PDS) has a unique solution on \([t_0,+\infty )\).
Proof
First, let us show that (A), (B), (C) hold. (A\(^\prime )\) implies that \({\bar{z}}\) is the only stationary point of (PDS), hence (A) holds.
Recall that \({\mathcal {D}}\) is a closed, convex subset of \({\mathbb {B}}(\frac{{\bar{w}}+{\bar{z}}}{2},\frac{\Vert {\bar{w}}-{\bar{z}}\Vert }{2})\) and \({\hat{\mathcal {D}}}\) is given as in (10). By (D\(^\prime )\), the projection \(P_{{\mathbb {C}}(x)}({\bar{w}})\) is well defined for all \(x\in {\hat{\mathcal {D}}}\). By (B\(^\prime )\) and (C\(^\prime )\), assumption (B) is satisfied since for all \(x\in {\hat{{\mathcal {D}}}}\subset {\mathcal {D}}\) and for any \(h\in [0,1]\)
i.e. \(x+h(P_{{\mathbb {C}}(x)}({\bar{w}})-x)\in {\hat{{\mathcal {D}}}}\). Note that by taking \(h=1\) we obtain that \(P_{{\mathbb {C}}(x)}({\bar{w}})\in {\hat{{\mathcal {D}}}}\) for any \(x\in {\hat{\mathcal {D}}}\). Assumption (C\(^\prime )\) is equivalent to (C) for \(F(x)=P_{{\mathbb {C}}(x)}({\bar{w}})-x\). Observe that the mapping \(F(x)=P_{{\mathbb {C}}(x)}({\bar{w}})-x\) is bounded on \({\hat{\mathcal {D}}}\). Indeed for any \(x\in {\hat{\mathcal {D}}}\) we have
where \(R=\sup _{x\in {\hat{\mathcal {D}}}} \Vert x\Vert \). Now, system (PDS) is of the form of (DS-0) with \(F(x)=P_{{\mathbb {C}}(x)}({\bar{w}})-x\) and all the assumptions of Theorem 1 are satisfied. The assertion of the theorem follows from Theorem 1.\(\square \)
Theorem 4
Suppose that (A\(^\prime )\), (B\(^\prime )\), (C\(^\prime )\), (D\(^\prime )\) hold. Assume that \(x\mapsto P_{{\mathbb {C}}(x)}({\bar{w}})\) is locally Lipschitz continuous on \({\hat{{\mathcal {D}}}}\setminus \{{\bar{z}}\}\) and continuous on \({\hat{\mathcal {D}}}\). Let x(t) be a solution of (PDS). Assume that for every increasing sequence \(\{t_n\}_{n\in {\mathbb {N}}}\), \(t_n\rightarrow +\infty \)
Then \(x(t)\rightarrow {\bar{z}}\) as \(t\rightarrow +\infty \).
Proof
By the proof of Theorem 3, for (PDS) assumptions (A), (B) and (C) are satisfied, and by assumption \(F(x)=P_{{\mathbb {C}}(x)}({\bar{w}})-x\) is locally Lipschitz continuous. Now the assertion follows from Theorem 2. \(\square \)
To investigate the local Lipschitz continuity of \(x \mapsto P_{{\mathbb {C}}(x)}({\bar{w}})\) on \({\hat{{\mathcal {D}}}}\setminus \{{\bar{z}}\}\) (and the continuity of \(x\mapsto P_{{\mathbb {C}}(x)}({\bar{w}})\) on \({\hat{{\mathcal {D}}}}\)) one should take into account the form of the multifunction \({\mathbb {C}}\). The behaviour of the projection of a given \({\bar{w}}\) onto a polyhedral multifunction \({\mathbb {C}}\), given by a finite number of linear inequalities and equalities, was investigated in, e.g., [8, Corollary 2]; see also [25, Theorem 6.5].
Proposition 10
Let \({\mathbb {T}}: \mathcal {X}\rightarrow \mathcal {X}\), which appears in system (S), be a firmly quasinonexpansive operator, i.e.,
Assume that \({\bar{w}}\in \mathcal {X}\), \({\bar{w}}\notin \text {Fix}\, {\mathbb {T}}\) and \({\bar{z}}=P_{\text {Fix}\, {\mathbb {T}}}({\bar{w}})\), and let \({\mathcal {D}}={\mathbb {B}}(\frac{{\bar{w}}+{\bar{z}}}{2}, \frac{\Vert {\bar{w}}-{\bar{z}}\Vert }{2})\), \({\hat{\mathcal {D}}}=\{ x \in {\mathcal {D}} \mid \Vert x-{\bar{w}}\Vert ^2\ge r \}\) for some \(r\in (0,\Vert {\bar{w}}-{\bar{z}}\Vert ^2)\). Then the assumptions (A\(^\prime )\), (B\(^\prime )\), (C\(^\prime )\), (D\(^\prime )\) hold for the system (PDS) with \({\mathbb {C}}: {\mathcal {D}} \rightrightarrows \mathcal {X}\) defined as \({\mathbb {C}}(x){:}{=}H({\bar{w}},x)\cap H(x,{\mathbb {T}} x)\).
Proof
By [6, Corollary 4.25], we have
Assumption (A\(^\prime )\) follows from (28), i.e., \(\text {Fix}\, {\mathbb {T}} \ni {\bar{z}}\in H(x,{\mathbb {T}} x)\) for all \(x\in {\hat{\mathcal {D}}}\) and \(x\in \text {Fix}\, {\mathbb {T}} \cap {\hat{\mathcal {D}}} \iff x = {\bar{z}}\). Assumption (B\(^\prime )\) follows from the fact that for any \(x\in {\hat{\mathcal {D}}}\subset {\mathcal {D}}\), \({\bar{z}}\in {\mathbb {C}}(x)\), hence \({\bar{z}}\in H({\bar{w}},P_{{\mathbb {C}}(x)}({\bar{w}}))\) and therefore \(P_{{\mathbb {C}}(x)}({\bar{w}})\in {\mathbb {B}}(\frac{{\bar{w}}+{\bar{z}}}{2}, \frac{\Vert {\bar{w}}-{\bar{z}}\Vert }{2})={\mathcal {D}}\) (see (8)). Assumption (C\(^\prime )\) follows from the fact that for any \(x\in {\mathcal {D}}\), \({\mathbb {C}}(x)\subset H({\bar{w}},x)\), hence \(P_{{\mathbb {C}}(x)}({\bar{w}})\in H({\bar{w}},x)\). Assumption (D\(^\prime )\) is satisfied since \(H({\bar{w}},x)\cap H(x,{\mathbb {T}} x)\) is closed and convex.\(\square \)
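Under the choices of Proposition 10, the explicit Euler discretization of (PDS) with unit step, \(x_{n+1}=P_{{\mathbb {C}}(x_n)}({\bar{w}})\), is a Haugazeau-type best approximation iteration. Below is a sketch with toy data of our own choosing: \(\mathcal {X}={\mathbb {R}}^2\), \({\mathbb {T}}=\frac{1}{2}(Id+P_{C_2}\circ P_{C_1})\) (firmly nonexpansive) with \(C_1=\{x\mid x_1\ge 1\}\), \(C_2=\{x\mid x_2\ge 1\}\) and \({\bar{w}}=(0,0)\), so that \(\text {Fix}\,{\mathbb {T}}=C_1\cap C_2\) and \({\bar{z}}=P_{\text {Fix}\,{\mathbb {T}}}({\bar{w}})=(1,1)\). The projection onto the intersection of the two half-spaces is computed here by Dykstra's alternating scheme rather than a closed formula:

```python
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def project_halfspace(x, a, b):
    """Projection onto H(a, b) = {h : <h - b | a - b> <= 0} (whole space if a == b)."""
    g = [ai - bi for ai, bi in zip(a, b)]
    gg = dot(g, g)
    viol = dot([xi - bi for xi, bi in zip(x, b)], g) if gg > 0.0 else 0.0
    if viol <= 0.0:
        return list(x)
    return [xi - (viol / gg) * gi for xi, gi in zip(x, g)]

def dykstra(x, projs, n_iter=200):
    """Dykstra's algorithm: projection of x onto an intersection of convex sets."""
    incr = [[0.0] * len(x) for _ in projs]
    y = list(x)
    for _ in range(n_iter):
        for i, proj in enumerate(projs):
            z = [yj + cj for yj, cj in zip(y, incr[i])]
            y = proj(z)
            incr[i] = [zj - yj for zj, yj in zip(z, y)]
    return y

def T(x):
    """Firmly nonexpansive T = (1/2)(Id + P_{C2} o P_{C1}), Fix T = C1 ∩ C2."""
    n = [max(x[0], 1.0), x[1]]          # P_{C1}
    n = [n[0], max(n[1], 1.0)]          # P_{C2}
    return [0.5 * (xi + ni) for xi, ni in zip(x, n)]

wbar = [0.0, 0.0]
x = list(wbar)                           # x_0 = wbar
for _ in range(40):
    tx = T(x)
    x = dykstra(wbar, [lambda z, b=list(x): project_halfspace(z, wbar, b),
                       lambda z, a=list(x), b=tx: project_halfspace(z, a, b)])
print(x)  # approaches zbar = (1, 1)
```

For this piecewise-linear toy data the iterates are \(x_n=(1-2^{-n})(1,1)\), converging strongly to \({\bar{z}}\), in line with the best approximation results recalled below.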
Depending upon the choice of the operator \({\mathbb {T}}\) in Proposition 10, we obtain dynamical systems of the form (PDS) related to different algorithms. In particular, our approach encompasses dynamical systems related to the following algorithms.
- Ex 1. When \({\mathbb {T}}: \mathcal {X}\rightarrow \mathcal {X}\) is firmly quasinonexpansive and \((Id-{\mathbb {T}})\) is demiclosed at 0, dynamical system (DS-0) corresponds to the best approximation algorithm for finding a point \({\bar{z}}\) from the set of fixed points of \({\mathbb {T}}\), i.e., for finding \({\bar{z}}\in \mathcal {X}\) such that \({\bar{z}}=P_{\text {Fix}{\mathbb {T}}}({\bar{w}})\) (see [6, Theorem 30.8]).
- Ex 2. When \({\mathbb {T}}=J_{A}\), where \(A:\mathcal {X}\rightrightarrows \mathcal {X}\) is maximally monotone, dynamical system (DS-0) corresponds to the best approximation algorithm for finding \(x\in \mathcal {X}\) such that \(0\in Ax\) (see [6, Corollary 30.11]). Let us recall that the resolvent operator of A is defined as \(J_{A}: \mathcal {X}\rightarrow \mathcal {X}\), \(J_{A}=(Id+A)^{-1}\).
- Ex 3. When \({\mathbb {T}}=(1/2)(Id+J_{\gamma A}\circ (Id-\gamma B))\), where \(A:\mathcal {X}\rightrightarrows \mathcal {X}\) is maximally monotone, \(B: \mathcal {X}\rightarrow \mathcal {X}\) is \(\beta \)-cocoercive and \(\gamma \in (0,2\beta ]\), dynamical system (DS-0) corresponds to the best approximation algorithm for finding \(x\in \mathcal {X}\) such that \(0\in Ax+Bx\) (see [6, Corollary 30.12]).
- Ex 4. When \({\mathbb {T}}:\mathcal {H}\times \mathcal {G}\rightarrow \mathcal {H}\times \mathcal {G}\) is defined as
$$\begin{aligned} {\mathbb {T}}(x)=P_{H(x)}(x),\quad H(x){:}{=}\{ h \in \mathcal {H}\times \mathcal {G}\mid \langle h | s^{*}(x)\rangle \le \eta (x) \}, \end{aligned}$$
(29)
and, for any \(x=(p,v^{*})\in \mathcal {H}\times \mathcal {G}\),
$$\begin{aligned} \left. \begin{aligned}&s^{*}(x) {:}{=} (a^{*}(x) + L^{*}b^{*}(x),\ b(x) - La(x)); \\&\eta (x) {:}{=}\langle a(x) | a^{*}(x) \rangle + \langle b(x) \mid b^{*}(x) \rangle ;\\&a(x) {:}{=} J_{\gamma A} (p - \gamma L^{*} v^{*}),\quad b(x) {:}{=} J_{\mu B} (L p + \mu v^{*}) ; \\&a^{*}(x) {:}{=} {}^{\gamma }A(p - \gamma L^{*} v^{*}),\quad b^{*}(x) {:}{=} {}^{\mu }{B}(Lp+ \mu v^{*}),\quad \gamma ,\mu \in (0,1), \end{aligned} \right\} \end{aligned}$$
(30)
dynamical system (DS-0) corresponds to the best approximation algorithm for finding \((p,v^{*})\in \mathcal {H}\times \mathcal {G}\) such that
$$\begin{aligned} 0\in Ap+L^{*}B(Lp) \quad \text {and}\quad 0\in -LA^{-1}(-L^{*}v^{*})+B^{-1}v^{*} \end{aligned}$$
(see [2]). Let us recall that for any \(\gamma >0\), \({}^\gamma A:\mathcal {H}\rightarrow \mathcal {H}\) is the Yosida approximation of A, \({}^\gamma A = \frac{1}{\gamma } (Id - J_{\gamma A})\).
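As a one-dimensional illustration of the operators appearing above (our toy example, not taken from [2]): for \(A=\partial |\cdot |\), the resolvent \(J_{\gamma A}=(Id+\gamma A)^{-1}\) is soft-thresholding, and the Yosida approximation \({}^{\gamma }A=\frac{1}{\gamma }(Id-J_{\gamma A})\) clips \(x/\gamma \) to \([-1,1]\):

```python
def resolvent_abs(x, gamma):
    """J_{gamma A}(x) for A = subdifferential of |.|: soft-thresholding."""
    if x > gamma:
        return x - gamma
    if x < -gamma:
        return x + gamma
    return 0.0

def yosida_abs(x, gamma):
    """Yosida approximation ^gamma A = (Id - J_{gamma A}) / gamma."""
    return (x - resolvent_abs(x, gamma)) / gamma

# ^gamma A is single-valued and Lipschitz even though A is set-valued at 0.
print(resolvent_abs(2.0, 0.5))  # 1.5
print(yosida_abs(2.0, 0.5))     # 1.0  (= sign(2.0), since |2.0| > 0.5)
print(yosida_abs(0.2, 0.5))     # 0.4  (= 0.2 / 0.5, inside the threshold)
```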
For other multifunctions \({\mathbb {C}}\) and other properties of projections onto moving sets, see, e.g., [29, Theorem 3.1], [15, Theorem 3.10], [33, Theorem 2.1], [24, Proposition 5.2] and [25, Example 6.4].
Notes
Here, for \(f(x){:}{=}F(x)+x\) (so that \(F(x)=f(x)-x\)) we have that \({\bar{z}}\in H({\bar{w}},f(x))\).
By a right-hand side half-neighbourhood of a given \(t\in {\mathbb {R}}\) we mean an interval of the form \([t,\alpha )\) for some \(\alpha >t\).
References
Abbas, B., Attouch, H., Svaiter, B.F.: Newton-like dynamics and forward–backward methods for structured monotone inclusions in Hilbert spaces. J. Optim. Theory Appl. 161(2), 331–360 (2014). https://doi.org/10.1007/s10957-013-0414-5
Alotaibi, A., Combettes, P.L., Shahzad, N.: Best approximation from the Kuhn–Tucker set of composite monotone inclusions. Numer. Funct. Anal. Optim. 36(12), 1513–1532 (2015). https://doi.org/10.1080/01630563.2015.1077864
Anatolievich, P.A.: Abstract differential equations with applications in mathematical physics (2020). http://math.phys.msu.ru/Education/Special_courses/Abstract_differential_equations_with_applications_in_mathematical_physics/
Antipin, A.S.: Minimization of convex functions on convex sets by means of differential equations. Differentsial'nye Uravneniya 30(9), 1475–1486, 1652 (1994)
Bauschke, H.H., Combettes, P.: A weak-to-strong convergence principle for Fejér-monotone methods in Hilbert spaces. Math. Oper. Res. 26(2), 248–264 (2001)
Bauschke, H.H., Combettes, P.L.: Convex analysis and monotone operator theory in Hilbert spaces, 2nd edn. CMS Books in Mathematics/Ouvrages de Mathématiques de la SMC. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-48311-5. With a foreword by Hédy Attouch
Bednarczuk, E.M., Minchenko, L.I., Rutkowski, K.E.: On Lipschitz-like continuity of a class of set-valued mappings. Optimization 69(12), 2535–2549 (2020). https://doi.org/10.1080/02331934.2019.1696339
Bednarczuk, E.M., Rutkowski, K.E.: On Lipschitz continuity of projections onto polyhedral moving sets. Appl. Math. Optim. (2020). https://doi.org/10.1007/s00245-020-09706-y
Boţ, R.I., Csetnek, E.R.: A dynamical system associated with the fixed points set of a nonexpansive operator. J. Dyn. Differ. Equ. 29(1), 155–168 (2017). https://doi.org/10.1007/s10884-015-9438-x
Boţ, R.I., Csetnek, E.R.: Convergence rates for forward–backward dynamical systems associated with strongly monotone inclusions. J. Math. Anal. Appl. 457(2), 1135–1152 (2018). https://doi.org/10.1016/j.jmaa.2016.07.007
Boţ, R.I., Csetnek, E.R., László, S.C.: A primal–dual dynamical approach to structured convex minimization problems. J. Differ. Equ. 269(12), 10717–10757 (2020). https://doi.org/10.1016/j.jde.2020.07.039
Bolte, J.: Continuous gradient projection method in Hilbert spaces. J. Optim. Theory Appl. 119(2), 235–259 (2003). https://doi.org/10.1023/B:JOTA.0000005445.21095.02
Brezis, H.: Functional Analysis, Sobolev Spaces and Partial Differential Equations. Universitext. Springer, New York (2011)
Dahmen, R., Glöckner, H.: Bounded solutions of finite lifetime to differential equations in Banach spaces. Acta Sci. Math. (Szeged) 81(3–4), 457–468 (2015)
Daniel, J.W.: The continuity of metric projections as functions of the data. J. Approx. Theory 12, 234–239 (1974). https://doi.org/10.1016/0021-9045(74)90065-3
Deimling, K.: Ordinary Differential Equations in Banach Spaces. Lecture Notes in Mathematics, vol. 596. Springer, Berlin (1977)
Dieudonné, J.: Deux exemples singuliers d'équations différentielles. Acta Sci. Math. (Szeged) 12, 38–40 (1950)
Godunov, A.N.: Peano's theorem in an infinite-dimensional Hilbert space is false, even in a weakened formulation. Mat. Zametki 15, 467–477 (1974)
Hájek, P., Johanis, M.: On Peano's theorem in Banach spaces. J. Differ. Equ. 249(12), 3342–3351 (2010). https://doi.org/10.1016/j.jde.2010.09.013
Hájek, P., Vivi, P.: Some problems on ordinary differential equations in Banach spaces. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 104(2), 245–255 (2010). https://doi.org/10.5052/RACSAM.2010.16
Komornik, V., Martinez, P., Pierre, M., Vancostenoble, J.: Blow-up of bounded solutions of differential equations. Acta Sci. Math. (Szeged) 69(3–4), 651–657 (2003)
Ladas, G.E., Lakshmikantham, V.: Differential Equations in Abstract Spaces. Mathematics in Science and Engineering, vol. 85. Academic Press, New York (1972)
Li, T.Y.: Existence of solutions for ordinary differential equations in Banach spaces. J. Differ. Equ. 18, 29–40 (1975). https://doi.org/10.1016/0022-0396(75)90079-0
Mordukhovich, B.S., Nghia, T.T.A.: Full Lipschitzian and Hölderian stability in optimization with applications to mathematical programming and optimal control. SIAM J. Optim. 24(3), 1344–1381 (2014). https://doi.org/10.1137/130906878
Mordukhovich, B.S., Nghia, T.T.A., Pham, D.T.: Full stability of general parametric variational systems. Set-Valued Variat. Anal. (2018). https://doi.org/10.1007/s11228-018-0474-7
Pennanen, T.: Dualization of monotone generalized equations. Ph.D. thesis, University of Washington (1999)
Pennanen, T.: Dualization of generalized equations of maximal monotone type. SIAM J. Optim. 10(3), 809–835 (2000). https://doi.org/10.1137/S1052623498340448
Peypouquet, J., Sorin, S.: Evolution equations for maximal monotone operators: asymptotic analysis in continuous and discrete time. J. Convex Anal. 17(3–4), 1113–1163 (2010)
Shapiro, A.: Directional differentiability of metric projections onto moving sets at boundary points. J. Math. Anal. Appl. 131(2), 392–403 (1988). https://doi.org/10.1016/0022-247X(88)90213-2
Szép, A.: Existence theorem for weak solutions of ordinary differential equations in reflexive Banach spaces. Stud. Sci. Math. Hungar. 6, 197–203 (1971)
Teixeira, E.V.: Strong solutions for differential equations in abstract spaces. J. Differ. Equ. 214(1), 65–91 (2005). https://doi.org/10.1016/j.jde.2004.11.006
Xia, Y.S., Wang, J.: On the stability of globally projected dynamical systems. J. Optim. Theory Appl. 106(1), 129–150 (2000). https://doi.org/10.1023/A:1004611224835
Yen, N.D.: Lipschitz continuity of solutions of variational inequalities with a parametric polyhedral constraint. Math. Oper. Res. 20(3), 695–708 (1995). https://doi.org/10.1287/moor.20.3.695
Yorke, J.A.: A continuous differential equation in Hilbert space without existence. Funkcial. Ekvac. 13, 19–21 (1970)
Acknowledgements
We are grateful to Professor Nikolai Pavlovich Osmolovskii for his remarks, as well as to the anonymous referee for valuable comments which considerably improved the quality of the paper.
Open Access
This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
RND acknowledges the support of IBS PAN, Warszawa, Poland and the Czech Science Foundation, Project GJ19-14413Y.
Appendix: Auxiliary Lemmas
Remark 7
Fact 8 implies that \({\mathcal {D}}\subset \bar{{\mathbb {B}}}(\frac{{\bar{w}}+{\bar{z}}}{2},\frac{\Vert {\bar{w}}-{\bar{z}}\Vert }{2})\), hence \({\mathcal {D}}\) is bounded. Moreover, this easily implies
that is, the pair \({\bar{z}}\), \({\bar{w}}\) realizes the maximal distance between two points of \({\mathcal {D}}\) (the diameter of \({\mathcal {D}}\)).
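For completeness, the diameter claim can be spelled out. Writing \(c := \frac{{\bar{w}}+{\bar{z}}}{2}\) and \(r := \frac{\Vert {\bar{w}}-{\bar{z}}\Vert }{2}\) for the center and radius of the ball containing \({\mathcal {D}}\), the triangle inequality gives, for all \(x,y\in {\mathcal {D}}\),

```latex
\|x-y\| \;\le\; \|x-c\| + \|c-y\| \;\le\; 2r \;=\; \|\bar{w}-\bar{z}\|,
```

and since \({\bar{w}},{\bar{z}}\in {\mathcal {D}}\) attain this bound, \(\text {diam}\,{\mathcal {D}}=\Vert {\bar{w}}-{\bar{z}}\Vert \).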
Lemma 3
Let \(x(\cdot )\in C([a,b],{\hat{{\mathcal {D}}}})\), \([a,b]\subset {\mathbb {R}}_{+}\). Then the function \(f(t){:}{=}F(x(t))\) is continuous, i.e., \(f(\cdot )\in C([a,b];\mathcal {X})\).
Lemma 4
(About extendability) Let x(t) be defined and continuously differentiable in a left-sided neighbourhood of \(t_0\), i.e.
and assume that the limit
exists and \(x_1\in F({\hat{{\mathcal {D}}}})\). Then
1. x(t) extends in a continuous way to a function \({\tilde{x}}(\cdot )\in C^1((t_0-\gamma ,t_0];{\hat{{\mathcal {D}}}})\);
2. \(\dot{{\tilde{x}}}_\ell (t_0) = x_1\) (where \(\ell \) denotes the left derivative of \({\tilde{x}}(\cdot )\) at \(t_0\)).
Proof of Lemma 4
It follows from the existence of the left-hand limit that the derivative is bounded in some left-sided half-neighbourhood of \(t_0\):
By the weakened finite-increments formula (the mean value inequality), the function x(t) is Lipschitz continuous on \((t_{0}-\zeta , t_{0})\) with some constant L. Therefore the function x(t) satisfies the Cauchy condition for the existence of the left-hand limit at \(t_0\), and
Put
The function constructed in this way is clearly continuous on \((t_{0}-\zeta , t_{0}]\). It remains to show that \(\dot{{\tilde{x}}}_\ell (t_{0})=x_{1}\), i.e.
or
To use the formula of Newton-Leibniz we introduce a function
By (31) and (32), the function z(t) is continuous on \((t_{0}-\zeta , t_{0}]\). We cannot yet claim that \(\dot{{\tilde{x}}}(t_{0})=x_{1}\); this is precisely what we now prove.
For any \(\delta \in (0,\zeta )\) we can rewrite the formula of Newton-Leibniz as
We take the limit as \(\delta \) tends to zero. On the one hand, \(x(t_{0}-\delta )\rightarrow x_{0}\) (see (34)). On the other hand,
because
Here we used the continuity of z(t), estimate (33) and, by (32), the estimate \(\Vert z(t_{0})\Vert \le L\).
Taking the limit on both sides in (35) we obtain
Then from (35) and (36) we have
for all \(t\in [t_{0}-\varDelta t, t_{0}]\), where the integrand is continuous. Applying, at the point \(t = t_0\), the theorem on differentiation of an integral with respect to its upper limit, we obtain
as required. \(\square \)
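A minimal numerical sketch of Lemma 4 (not part of the proof; the concrete choice \(x(t)=e^{t}\) on \((-1,0)\) with \(t_0=0\) is assumed purely for illustration): the derivative \(e^{t}\) has left-hand limit \(x_1=1\) at \(t_0\), so the continuous extension should have left derivative \(x_1\), which the difference quotients confirm.

```python
import math

# Toy instance of Lemma 4: x(t) = exp(t) on the left-sided
# neighbourhood (-1, 0) of t0 = 0.  Its derivative exp(t) has the
# left-hand limit x1 = 1 at t0, so the lemma predicts that x extends
# continuously to t0 with left derivative x1.

def x(t):
    return math.exp(t)

t0 = 0.0
x1 = 1.0                 # limit of the derivative exp(t) as t -> t0^-

# Continuous extension: x_tilde(t0) := lim_{t -> t0^-} x(t),
# the Cauchy limit guaranteed in the proof.
x_tilde_t0 = x(t0)       # = 1.0

# The left difference quotient approaches x1 as delta -> 0.
for delta in (1e-1, 1e-3, 1e-5):
    quotient = (x_tilde_t0 - x(t0 - delta)) / delta
    print(f"delta={delta:.0e}  quotient={quotient:.6f}")
```

For \(\delta \rightarrow 0\) the printed quotients \((1-e^{-\delta })/\delta = 1-\delta /2+O(\delta ^2)\) tend to \(x_1=1\), matching conclusion 2 of the lemma.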
Lemma 5
Let x(t) be a Lipschitz function on (a, b), \(a,b\in {\mathbb {R}}\), with Lipschitz constant L and values in a Hilbert space \(\mathcal {X}\). Then the limit \(\lim _{t\rightarrow b^-} x(t)\) exists.
Proof
It is enough to show that x(t) has the Cauchy property at \(b^-\), in the sense that
Since x(t) is Lipschitz on (a, b) we have
Take any \(\varepsilon >0\) and set \(\delta = \frac{\varepsilon }{2L}\). Then for any \(t_1,t_2\) with \(0<b-t_1<\delta \) and \(0<b-t_2<\delta \) we have
which proves (37).\(\square \)
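A minimal numerical sketch of Lemma 5 (an illustration under assumed data, not the proof): the concrete function \(x(t)=|t-\frac{1}{2}|\) is Lipschitz on \((0,1)\) with constant \(L=1\), and with the \(\delta =\frac{\varepsilon }{2L}\) chosen in the proof, any two values taken within \(\delta \) of \(b=1\) differ by less than \(\varepsilon \), so the one-sided limit exists (here it equals \(\frac{1}{2}\)).

```python
import random

# Toy instance of Lemma 5: x(t) = |t - 0.5| is Lipschitz on (0, 1)
# with constant L = 1; check the Cauchy property at b = 1 from the left.
L = 1.0
b = 1.0
x = lambda t: abs(t - 0.5)

eps = 1e-3
delta = eps / (2 * L)    # the delta chosen in the proof of Lemma 5

random.seed(0)
for _ in range(1000):
    t1 = b - delta * random.random()   # t1, t2 in (b - delta, b]
    t2 = b - delta * random.random()
    # Lipschitz bound, then |t1 - t2| < delta = eps/2 < eps.
    assert abs(x(t1) - x(t2)) <= L * abs(t1 - t2) < eps

print(f"x(t) -> {x(b):.3f} as t -> {b}^-")
```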
Lemma 6
For all \(x\in {\mathcal {D}}\) we have
Proof
This follows from (9): we have
for all \(x\in {\mathcal {D}}\). \(\square \)
Cite this article
Bednarczuk, E.M., Dhara, R.N. & Rutkowski, K.E.: Dynamical System Related to Primal–Dual Splitting Projection Methods. J. Dyn. Diff. Equat. 35, 3433–3458 (2023). https://doi.org/10.1007/s10884-021-10068-4
Keywords
- Autonomous ordinary differential equations
- Locally Lipschitz vector field
- Existence and uniqueness of solutions
- Extendability of solutions
- Projected dynamical systems