Dynamical System Related to Primal–Dual Splitting Projection Methods

We introduce a dynamical system associated with the problem of finding zeros of the sum of two maximally monotone operators. We investigate the existence, uniqueness and extendability of solutions to this dynamical system in a Hilbert space, and we prove that its trajectories converge strongly to a primal–dual solution of the considered problem. Under explicit time discretization, the dynamical system yields the best approximation algorithm for solving the coupled monotone inclusion problem.


Introduction
Let H, G be Hilbert spaces. We consider the problem of finding p ∈ H such that

0 ∈ Ap + L*BLp, (P)

where A : H ⇒ H, B : G ⇒ G are maximally monotone operators and L : H → G is a bounded linear operator. Together with problem (P) we consider the dual problem of finding v* ∈ G such that

0 ∈ B⁻¹v* − LA⁻¹(−L*v*). (D)

To problems (P) and (D) we associate the Kuhn–Tucker set defined as

Z := {(p, v*) ∈ H × G : −L*v* ∈ Ap and Lp ∈ B⁻¹v*}. (Z)

The set Z is nonempty if and only if there exist solutions to both the primal problem (P) and the dual problem (D) (see [26, Corollary 2.12]).
Our aim in this paper is to investigate, for given x0, w ∈ H × G, the following dynamical system, whose solutions asymptotically approach a solution of (P)–(D):

ẋ(t) = Q(w, x(t), Tx(t)) − x(t), t ≥ 0, x(0) = x0, (S)

where T : H × G → H × G is an operator whose fixed point set is Z (Fix T = Z), with Z defined by (Z), and

Q(w, b, c) := P_{H(w,b) ∩ H(b,c)}(w) (1.1)

is the projection of the element w onto the set H(w, b) ∩ H(b, c), the intersection of two half-spaces of the form

H(a, b) := {x ∈ H × G : ⟨x − b | a − b⟩ ≤ 0}. (1.2)

Under explicit discretization with step size equal to one, the system (S) becomes the best approximation algorithm for finding a fixed point of T introduced in [2, Proposition 2.1] (see also [6, Theorem 30.8]):

x_{n+1} = Q(w, x_n, x_{n+1/2}), n ∈ N, (1.3)
with the choice x_{n+1/2} := T(x_n) and starting point x0. The characteristic feature of this algorithm is the strong convergence of the sequence (x_n) to a fixed point of T (see also [5]). In contrast, the dynamical system investigated, e.g., in [11] is related to another primal–dual method, which exhibits only weak convergence.
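For illustration, here is a minimal numerical sketch of the update (1.3). The explicit case formula for Q below follows the classical Haugazeau-type construction (cf. [6]); the operator T used here, T = (Id + P_C)/2 with C the Euclidean unit ball (so that Fix T = C), is an assumed toy choice, not the operator of this paper.

```python
import numpy as np

def Q(x, y, z, eps=1e-12):
    """Projection of x onto H(x, y) ∩ H(y, z), where
    H(a, b) = {u : <u - b | a - b> <= 0}, via the explicit
    Haugazeau-type case formula (cf. [6])."""
    pi = float(np.dot(x - y, y - z))
    mu = float(np.dot(x - y, x - y))
    nu = float(np.dot(y - z, y - z))
    rho = mu * nu - pi * pi
    if rho <= eps * max(mu * nu, 1.0):   # x - y and y - z (nearly) collinear
        if pi >= 0.0:
            return z
        raise ValueError("H(x, y) ∩ H(y, z) is empty")
    if pi * nu >= rho:
        return x + (1.0 + pi / nu) * (z - y)
    return y + (nu / rho) * (pi * (x - y) + mu * (z - y))

# Illustrative operator with Fix T = closed unit ball C:
# T = (Id + P_C)/2 is firmly quasinonexpansive (an assumed toy choice).
def proj_ball(u):
    n = np.linalg.norm(u)
    return u if n <= 1.0 else u / n

T = lambda u: 0.5 * (u + proj_ball(u))

w = np.array([3.0, 4.0])   # point whose best approximation in Fix T is sought
x = w.copy()               # classical initialization x_0 = w
for _ in range(80):
    x = Q(w, x, T(x))      # x_{n+1} = Q(w, x_n, T x_n), cf. (1.3)
# the iterates converge strongly to P_{Fix T}(w) = w/||w|| = (0.6, 0.8)
```

The strong (norm) convergence of the iterates to P_{Fix T}(w), rather than mere weak convergence, is exactly the feature discussed above.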
In the case A = ∂f, B = ∂g, where f : H → R ∪ {+∞}, g : G → R ∪ {+∞} are proper, convex, lower semicontinuous (l.s.c.) functions, problem (P) (if solvable) reduces to finding a point p ∈ H solving the minimization problem (see [27])

minimize_{p∈H} f(p) + g(Lp) (1.4)

and (D) reduces to finding a point v* ∈ G solving the corresponding maximization problem. First-order dynamical systems related to optimization problems have been discussed by many authors (see, e.g., [1,4,9,10,12]). In those papers, a natural assumption is that the vector field F is globally Lipschitz, and consequently the existence and uniqueness of solutions to the dynamical system is guaranteed by classical results (see, e.g., [13, Theorem 7.3]). For instance, Abbas, Attouch and Svaiter considered in [1] the system

ẋ(t) + x(t) = prox_{µΦ}(x(t) − µB(x(t))),

where Φ : H → R ∪ {+∞} is a proper, convex and l.s.c. function defined on a Hilbert space H, B : H → H is a β-cocoercive operator, and prox_{µΦ} : H → H is the proximal operator defined as

prox_{µΦ}(x) = argmin_{y∈H} {Φ(y) + (1/(2µ))‖x − y‖²}.
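As a concrete instance of the proximal operator just defined, for Φ = ‖·‖₁ the minimization defining prox_{µΦ} has the well-known closed-form soft-thresholding solution. The sketch below cross-checks the closed form against a brute-force grid minimization of the defining objective; the choice of Φ and the scalar data are illustrative assumptions.

```python
import numpy as np

def prox_l1(x, mu):
    """prox_{µΦ}(x) for Φ = ||·||_1: componentwise soft-thresholding,
    the closed-form minimizer of Φ(y) + (1/(2µ))||x − y||²."""
    return np.sign(x) * np.maximum(np.abs(x) - mu, 0.0)

# brute-force check of the argmin property in one dimension
x0, mu = 1.7, 0.5
grid = np.linspace(-3.0, 3.0, 60001)
objective = np.abs(grid) + (grid - x0) ** 2 / (2.0 * mu)
# the grid minimizer agrees with the closed form x0 - mu = 1.2
```

The same componentwise formula extends to vectors, which is what makes proximal-type dynamical systems cheap to discretize for separable Φ.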
The most essential difference between (S) and the systems (1.6) and (1.7) is that, in general, one cannot expect the vector field Q given in (S) to be globally Lipschitz with respect to the variable x, as is the case for the dynamical systems (1.6) and (1.7).
The contribution of the present investigation is as follows. We formulate the problem and provide preliminary facts in Sections 2 and 3, respectively. In Section 4 we prove the existence and uniqueness of solutions to the dynamical system (S) by studying a more general problem (DS-0). Extendability of solutions to the dynamical system (DS-0) is studied in Section 5. The behaviour at +∞ of solutions to (DS-0) is investigated in Section 6. In Section 7 we present applications of the results obtained for (DS-0) to projected dynamical systems (PDS).

Formulation of the problem
Suppose that the set Z given by (Z) is nonempty. Then for all x ∈ H × G we have Z ⊂ H(x, Tx). Let w ∈ H × G and z = P_Z(w). Define the open ball in the Hilbert space H × G centered at a ∈ H × G with radius R > 0 as

B(a, R) := {x ∈ H × G : ‖x − a‖ < R}.

We limit ourselves to a closed subset D ⊂ H × G such that for all x ∈ D we have z ∈ H(w, x). This latter condition ensures that z is an equilibrium point of (S). The fact that z ∈ H(w, x) ∩ H(x, Tx) implies that

Q(w, x, Tx) ∈ B̄((w + z)/2, ‖w − z‖/2).

Therefore, we will limit our attention to Q(w, ·, T(·)) given by (1.1) defined on D ⊂ B̄((w + z)/2, ‖w − z‖/2). Let us note that for x = w we have H(w, x) = H × G. This motivates us to restrict our investigations to the set D̃ := D \ B(w, r) for some r > 0 such that D̃ is nonempty.
System (S) is an autonomous dynamical system of the form

ẋ(t) = F(x(t)), x(t0) = x0, (DS)

where F : D → X, with X a Hilbert space, is a continuous function, locally Lipschitz on D except at a single point z ∈ D, and D is a closed and bounded set in X. Indeed, when F(x) := Q(w, x, Tx) − x, where T : H × G → H × G is defined as in (7.4) and Q : (H × G)³ → H × G is defined in (1.1), the system (DS) reduces to (S). For other applications we refer the reader to Section 7.
A survey of existing results on the solvability and uniqueness of solutions, going beyond the classical Cauchy–Picard theorem from finite- to infinite-dimensional settings, can be found in [20].
The main difficulties in investigating the existence of solutions to autonomous ODEs in infinite-dimensional settings are due to the lack of compactness, see [22, Remark 5.1.1]. For instance, the continuity of the right-hand side vector field F is not enough to obtain a counterpart of Peano's theorem in infinite-dimensional spaces [17], even in Hilbert spaces [34].
In [18] Godunov proved that in every infinite-dimensional Banach space there exists a continuous vector field F such that the related problem (DS) has no solution, whereas, by the Cauchy–Lipschitz (Picard–Lindelöf) theorem, a global Lipschitz condition on the right-hand side field ensures the uniqueness and/or extendability of the solution, see [13, Theorem 7.3]. Some attempts to weaken the global Lipschitz condition on the right-hand side vector field have been made in the context of the existence of solutions, see, e.g., [22, Theorem 5.1.1] and [19,23,30,31] and the references therein. It is observed that local Lipschitzness of the vector field allows one to prove local existence and uniqueness for the related problems. For instance, one can adapt [22, Theorem 5.1.1] to the case of an autonomous differential system in the following way.

Corollary 2.1. Suppose that, on the ball B(x0, β),

‖F(x)‖ ≤ M and ‖F(x) − F(y)‖ ≤ K‖x − y‖ for all x, y ∈ B(x0, β),

where K and M are nonnegative constants. Let α > 0 be such that α ≤ β/M. Then there exists one and only one (strongly) continuously differentiable function x(t) satisfying (DS) on [t0, t0 + α].

Let us note that Corollary 2.1 is not applicable to system (DS) in the case x0 ∉ int D (see also Remark 4.7 below). Moreover, it was shown that a local Lipschitz condition is not enough to guarantee the existence of trajectories on [t0, +∞) (see, e.g., [21] and the references therein). Instead, in Sections 4 and 5 we will use modified standard techniques to show the existence and uniqueness of solutions to (DS).
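The existence mechanism behind results such as Corollary 2.1 is the method of successive (Picard) approximations, x_{k+1}(t) = x0 + ∫_{t0}^t F(x_k(s)) ds. A toy numerical sketch for the globally Lipschitz scalar field F(x) = x (our illustrative choice, with exact solution e^t) shows the iterates converging uniformly on [0, 1]:

```python
import numpy as np

# Successive (Picard) approximations for the scalar autonomous ODE
# x'(t) = F(x(t)) = x(t), x(0) = 1, on [0, 1]; the exact solution is exp(t).
t = np.linspace(0.0, 1.0, 1001)
x = np.ones_like(t)                  # x_0(t) ≡ x(0) = 1
for _ in range(25):
    integrand = x                    # F(x) = x
    # x_{k+1}(t) = x(0) + ∫_0^t F(x_k(s)) ds  (trapezoidal quadrature)
    steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t)
    x = 1.0 + np.concatenate(([0.0], np.cumsum(steps)))
# after enough iterations x(t) ≈ exp(t) uniformly on [0, 1]
```

Here the k-th iterate is the k-th Taylor partial sum of exp, which makes the geometric (contraction) convergence on a bounded interval visible.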
In [14] a smooth vector field is constructed such that the corresponding autonomous dynamical system has a bounded maximal solution which is not globally defined.
In finite-dimensional settings, under the assumptions of local Lipschitzness and a boundedness condition on the vector field, the existence and uniqueness of the trajectory on [t0, +∞) were shown by Xia and Wang in [32]. The authors applied their results to the study of projected dynamical systems.

Preliminaries
In this section we formulate the system (S) (and (DS)) in general form. Let w, z ∈ X, where X is a Hilbert space with inner product ⟨· | ·⟩ and associated norm ‖x‖ = √⟨x | x⟩, and let D ⊂ X be a closed set such that

w, z ∈ D ⊆ B̄((w + z)/2, ‖w − z‖/2). (3.1)

Note that condition (3.1) immediately implies that w and z are boundary points of the set D.
Let r be such that ‖w − z‖/2 > r > 0. Throughout this paper we consider the set D̃ related to D (see Figure 3.1):

D̃ := D \ B(w, r). (3.2)

We consider the following Cauchy problem

ẋ(t) = F(x(t)), x(t0) = x0, (DS-0)

where F : D → X is a continuous function on D, locally Lipschitz on D \ {z} and bounded on D (‖F(x)‖ ≤ M, M > 0, x ∈ D). Moreover, we assume:

(A) z is the only zero of F in D, i.e. F(x) = 0 iff x = z;
(B) for all x ∈ D and all h ∈ [0, 1] we have x + hF(x) ∈ D.

Together with assumptions (A), (B) we also consider the following assumption related to the behaviour of the projection¹:

(C) ⟨F(x) | w − x⟩ ≤ 0 for all x ∈ D.

¹ Here, for f(x) := F(x) + x (so that F(x) = f(x) − x) we have that z ∈ H(w, f(x)).
An example is the vector field

F(x) = P_{C(x)}(w) − x, (3.3)

where P_{C(x)}(w) is the projection of w onto C(x), C : D ⇒ X is a multifunction given by C(x) = H(w, x) ∩ H(x, g(x)) (see formula (1.2) for H(·, ·)) and g : X → X satisfies z ∈ H(x, g(x)) for all x ∈ X. Under a suitable assumption on g, the function F given by (3.3) is locally Lipschitz on D \ {w, z} (see, e.g., [7]), continuous on D \ {w} and bounded on D.
Throughout the paper we use the following concepts of solutions to the dynamical systems (DS-0) and (DS) and of their extendability.
Definition 3.2. A solution of the Cauchy problem ẋ(t) = F(x(t)), x(t0) = x0, where F : A → X, A ⊆ X, on an interval T is any function x : T → A satisfying (1) the initial condition x(t0) = x0; (2) the equation ẋ(t) = F(x(t)) for all t ∈ T, where differentiation is understood in the sense of the strong derivative on the space X, and at a boundary point of the interval T, in the case when it belongs to T, differentiation is understood in the one-sided sense.
Remark 3.4. The restriction of a solution x(t) of (DS-0) to any subinterval T1 of its interval of definition is a solution of the Cauchy problem (DS-0) on T1 with initial condition x0 = x(t0'), where t0' is the left endpoint of T1.
The main results on the existence, uniqueness and extendability of solutions to (DS) read as follows.
Theorem 3.5 (Existence and uniqueness). Suppose that assumptions (A), (B) and (C) hold. Then there exists a unique solution of (DS-0) on [t0, +∞).

Theorem 3.6 (Behaviour at +∞). Let x(t) be the unique solution of (DS-0) on [t0, +∞). Assume that for every increasing sequence {t_n}_{n∈N}, t_n → +∞, the following implication holds:

x(t_n) ⇀ x̄ ⟹ x̄ = z. (3.4)

Then the trajectory x(t) satisfies lim_{t→+∞} x(t) = z, where the convergence is understood in the sense of the norm of X.
Remark 3.7. Condition (3.4) can be seen as a continuous analogue of condition (iv) of Proposition 2.1 of [2]. Namely, to obtain strong convergence of the sequence generated by (1.3), it is assumed in Proposition 2.1 of [2] that for any strictly increasing sequence {k_n} ⊂ N the following implication holds: x_{k_n} ⇀ x̄ ⟹ x̄ = z.

Solutions to (DS-0) on closed intervals
In this section we consider the existence and uniqueness of solutions to (DS-0) defined on closed intervals, namely [t0, T], where T > t0 is finite. In deriving existence and uniqueness results, we modify two standard approaches (with the help of assumptions (A)-(C)): the Euler method (Section 4.1) and the contraction mapping principle (Section 4.2). To this aim we will use the following proposition.

Proposition 4.1. Assume that (C) holds. Then any solution x(t) of (DS-0) satisfies the condition that ‖x(t) − w‖ is nondecreasing with respect to t ≥ t0.
Proof. Note that x(t) is continuously differentiable on [t0, +∞); therefore, by (C), we have

(1/2) d/dt ‖x(t) − w‖² = ⟨ẋ(t) | x(t) − w⟩ = −⟨F(x(t)) | w − x(t)⟩ ≥ 0. □

Now we show the uniqueness of trajectories.
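As a quick numerical sanity check of Proposition 4.1, consider the toy field F(x) = z − x on the closed ball whose diameter is the segment [z, w]: membership in this ball is characterized by ⟨z − x | w − x⟩ ≤ 0, so assumption (C) holds, and z is the unique zero of F. All concrete choices below (the space R², the points w, z, the Euler step) are illustrative assumptions, not part of the paper's construction.

```python
import numpy as np

w = np.array([1.0, 0.0])
z = np.array([-1.0, 0.0])
# D = closed ball with diameter segment [z, w]: x ∈ D iff <z - x | w - x> <= 0,
# so the toy field F(x) = z - x satisfies assumption (C) on D, and (A) holds.
F = lambda u: z - u

x = np.array([0.0, 0.9])                 # starting point x_0 ∈ D
h = 1e-3
dist = [np.linalg.norm(x - w)]
for _ in range(20000):
    x = x + h * F(x)                     # explicit Euler step for (DS-0)
    dist.append(np.linalg.norm(x - w))
# along the trajectory ||x(t) - w|| is nondecreasing and x(t) -> z
```

Note that each Euler step x + hF(x) = (1 − h)x + hz is a convex combination of points of D, so the discrete trajectory stays in D, mirroring assumption (B).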
Proposition 4.2. Let t0 ≥ 0 and let x0 ∈ D \ {z}. Assume that assumptions (A) and (C) hold. If (DS-0) is solvable on a given interval [t0, T], then the solution is unique on this interval.
Proof. We show the uniqueness of solutions of (DS-0) on [t0, T]. Let x1(t), x2(t) be two solutions and note that x1(t0) = x0 = x2(t0). Since x0 ≠ z, by assumption (A) the trajectories stay away from z near t0, where F is locally Lipschitz. Since x1 and x2 are Lipschitz functions with constant M, there exists a neighbourhood of t0 on which both trajectories remain in a set where F admits a common Lipschitz constant. Applying Gronwall's inequality to the function t ↦ ‖x1(t) − x2(t)‖², where the integral is understood in the sense of Riemann and x(t) ∈ D, t ∈ I, we obtain x1 = x2 on this neighbourhood and, by continuation, on the whole interval [t0, T]. □

Let us define

B_{t0,T} := C([t0, T]; D) and, for R > 0, B^R_{t0,x0,T} := C([t0, T]; B̄(x0, R) ∩ D),

endowed with the supremum metric. Note that B_{t0,T} is a complete metric space due to the fact that D is a closed subset of the Hilbert space X. Moreover, in the sequel we consider on D the topology induced by that of the ambient space.

4.1. Euler method. We start with the following construction of Euler trajectories.
Proposition 4.4. (1) If X is finite-dimensional, then for all T > t0 there exists a solution x(t) of (DS-0) on [t0, T] in the class B_{t0,T}. (2) If X is infinite-dimensional, then there exist R > 0 and T > t0 such that there exists a solution x(t) of (DS-0) on [t0, T] in the class B^R_{t0,x0,T}.

Proof. Let us start with the initial settings.
(1) In the case where X is finite-dimensional we take any T > t0. Note that in this case D is closed and bounded, hence compact. Since F is continuous on D, F is uniformly continuous, i.e.
For all λ ∈ (0, 1] and any t ∈ [t0, T] we construct the corresponding Euler approximation. (2) In the infinite-dimensional case, let us note that F is uniformly continuous on B(x0, R) ∩ D, i.e.

4.2. Contraction mapping principle for an extended vector field F̄. We consider the following Cauchy problem

ẋ(t) = F̄(x(t)), x(t0) = x0, (DS-1)

where F̄ : X → X is such that F̄(x) = F(x) for all x ∈ D and F̄ is continuous on X.
Proof. For a given x ∈ C([t0, t0 + T]; X), define S[x] to be the function on [t0, t0 + T] given by

S[x](t) := x0 + ∫_{t0}^{t} F̄(x(s)) ds,

where F̄ is the extension of F given by Lemma 4.1. In what follows, the boundedness of F or of F̄ is used on the respective domain.
Step 1. If x ∈ C([t0, t0 + T]; D), then S[x] makes sense, since the right-hand side is well defined.
The continuity of S[x] then follows.

Step 3. Denote C0 := C([t0, t0 + T]; X). Consider the following form of a ball in C0, in which we intend to look for a fixed point.
Clearly, C0D (⊆ C0) is a complete metric space with the metric induced by the norm of C0. Let us show that, for T small enough, the operator S maps C0D into itself and has a fixed point.
Remark 4.7. The proof of the above proposition does not work for F defined only on the set D. This comes from the fact that the operator S may map a function x(·) outside of D, in which case Step 4 of the proof cannot be applied. However, in the case x0 ∈ int D, the following corollary holds.
Proof. The proof follows the lines of the proof of Proposition 4.6 up to Step 3, with F̄ replaced by F; then we proceed as follows.
We consider the following two cases.
Case 1. Suppose ρ > 0. Case 2. Suppose x0 ∈ D is such that ρ = 0. Then one can follow the proof of Proposition 4.6.
We look for a solution to (DS) in Case 1. Let us consider the following two possible cases for fixed r > 0. Thereafter, as in Step 4 of the proof of Proposition 4.6, we show the existence of a Cauchy sequence in C0D.
Step 5. Moreover, C0D is a closed subset of C0; indeed, this follows from the continuity of S.

Step 6. Finally, the limit x̄ ∈ C0D must be a fixed point of S. Hence we arrive at a solution to (DS).
In the following example we show that the existence of solutions of (DS) is not guaranteed without assumption (B); however, there still exist solutions of (DS-1) due to Proposition 4.6.
The following example shows that by considering (DS-1) under assumption (B) we may lose the uniqueness of solutions in the sense of Definition 3.2.

Extendability of solutions to (DS-0)
In this section we prove Theorem 3.5. The proof is based on two lemmas, Lemma 5.1 and Lemma 8.3 (see Appendix). The proposed approach follows the lines of Lecture 3 of the lecture notes [3]. The crucial assumptions are (A), (B) and (C) (see Lemma 5.1 below). For more general results and examples on the extendability of solutions, see, e.g., [21] and the references therein.
Let T = [t0, T), t0 < T ≤ +∞, or T = [t0, T], t0 < T < +∞. As a consequence of Proposition 4.4 and Corollary 4.5 we have the following 'non-branching' result.

Lemma 5.1. Suppose that assumptions (A), (B) and (C) are satisfied. Let x1(t), x2(t) be solutions to problem (DS) in the sense of Definition 3.2 on T1, T2, respectively. Then one of these solutions is a prolongation of the other (in particular, they coincide if T1 = T2).
Proof. Suppose, on the contrary, that neither solution is a prolongation of the other, and consider the set

T≠ := {t ∈ T1 ∩ T2 : x1(t) ≠ x2(t)}.

Note that t0 ∉ T≠ (by the initial condition of (DS)). Furthermore, the set T≠ is open in T1 ∩ T2, because it is the inverse image of the open set (0, +∞) under the continuous mapping t ↦ ‖x1(t) − x2(t)‖. Let T* := inf T≠. This means that in any right-hand side half-neighbourhood² of the point T* there exists t1 > T* such that t1 ∈ T1 ∩ T2, and the intersection of this right-hand side half-neighbourhood with T≠ is nonempty.
Take any α > T* and t1 ∈ T≠ ∩ [T*, α). By Remark 3.4, the functions x1(t), x2(t) are solutions to the Cauchy problem (5.1) restarted at T* with initial value x1(T*) = x2(T*), and

sup_{t} ‖xi(t) − x1(T*)‖ < +∞.

² By the right-hand side half-neighbourhood of a given t ∈ R we mean an interval of the form [t, α) for some α > t.
(5.2) By Corollary 4.5, there exists T̄ > t0 such that for any T ∈ (t0, T̄], the solution of the Cauchy problem (5.1) on the interval [T*, T* + T] satisfying (5.3) is unique. Taking T = min{T̄, t1 − T*} we arrive at a contradiction with Corollary 4.5, because, by (5.2), condition (5.3) holds for both x1(t) and x2(t), while the functions x1(t) and x2(t) differ in any right-hand side half-neighbourhood of T*. □

Now we are ready to prove Theorem 3.5.
Proof of Theorem 3.5. By Corollary 4.5, there exists a solution of problem (DS-0) on some interval [t0, T] (T > t0) in the class B^R_{t0,x0,T} for some R > 0. By Lemma 5.1, for any two solutions of problem (DS-0) on different intervals, one is a prolongation of the other.
Consider now, for any T > t0, all functions from C¹([t0, T], D). Among these functions there may or may not exist solutions of problem (DS-0). Put

T0 := sup{T > t0 : problem (DS-0) has a solution in C¹([t0, T], D)}. (5.4)

If T0 = +∞, there exists a solution x(t) ∈ C¹([t0, +∞), D) to problem (DS-0). Indeed, taking a monotonically increasing sequence T_n → +∞ and a corresponding sequence of solutions {x_n(t)}, by Lemma 5.1 we get that for all n ∈ N the solution x_{n+1} is a prolongation of x_n. Hence the function obtained by patching the x_n together is a solution defined on [t0, +∞). Other solutions (which do not coincide with restrictions of x(t) to smaller intervals) do not exist by Lemma 5.1. In the rest of the proof we show that this is the only possible case.

Consider now T0 < +∞. Then two cases are possible: (a) there exists a solution of (DS-0) in C¹([t0, T0], D); (b) there is no such solution on the closed interval [t0, T0]. In case (a), there exists a solution x(·) ∈ C¹([t0, T0], D) to problem (DS-0). But then, by Corollary 4.5 applied to problem (DS-0) with t0 = T0, the solution can be extended beyond T0, and both one-sided derivatives ẋ−(T0) and ẋ+(T0) exist and equal F(x(T0)): the left one by the definition of solutions on [t0, T0], the right one by the definition of a solution to the problem with interval starting at T0. As a consequence, we get a solution on a larger interval and arrive at a contradiction with the definition of T0. This excludes case (a).
In case (b), by arguments analogous to the case T0 = +∞, we get the existence and uniqueness of a solution x(t) of (DS-0) on the half-open interval [t0, T0). Case (b) splits into two subcases:
(1) lim sup_{t→T0−} ‖x(t)‖ = +∞ (i.e. the solution is unbounded in every left-sided neighbourhood of T0);
(2) the solution is bounded on [t0, T0).
Subcase (1) is impossible in view of the boundedness of the set D. Now we show that subcase (2) is also impossible. Indeed, let the function x(t) be bounded on the whole half-interval [t0, T0). We have ‖F(x(t))‖ ≤ M for all t ∈ [t0, T0). From the equation (DS-0) it then follows that the function x(t) is Lipschitz continuous with constant M on (t0, T0), since ‖ẋ(t)‖ ≤ M for all t ∈ (t0, T0). Hence, by Lemma 8.3 (see Appendix), there exists the limit Y0 := lim_{t→T0−} x(t). Extend x(t) to [t0, T0] by assigning the value Y0 at T0; the obtained function Y(t) is continuous from the left at T0. Then, by Lemma 8.1 (see Appendix), the function F(Y(t)) is also continuous from the left at T0, and hence lim_{t→T0−} F(x(t)) = F(Y0). Since ẋ(t) = F(x(t)) for t < T0, from the last formula we get lim_{t→T0−} ẋ(t) = F(Y0). By the lemma about extendability at a point (Lemma 8.2, see Appendix), it follows that the function x(t) can be extended from [t0, T0) onto [t0, T0] with preservation of continuous differentiability (denote the obtained function by Y(t)), with Ẏ(T0) = F(Y0), and Y(t) is a solution on [t0, T0]. We thus arrive at a contradiction in subcase (2) of case (b) (solutions on [t0, T0] do not exist). □

Behaviour of trajectories at +∞
In this section we prove Theorem 3.6 and provide further results concerning the convergence of trajectories.

Proposition 6.1. Let x(t), t ∈ [t0, +∞), be a solution of (DS-0). Suppose that there exists an increasing sequence {t_n}_{n∈N}, t_n → +∞, such that x(t_n) → z. Then x(t) → z as t → +∞.
Proof. Let {t_n}_{n∈N}, t_n → +∞, be such that x(t_n) → z. We will show that for all ε > 0 and every increasing sequence {s_n}_{n∈N}, s_n → +∞, there exists n0 ∈ N such that ‖x(s_n) − z‖ ≤ ε for all n ≥ n0. Take any ε > 0 and an increasing sequence {s_n}, s_n → +∞. By Proposition 4.1 the function ‖x(·) − w‖² is nondecreasing. Moreover, by (3.1) (see also Lemma 8.4) and the convergence of x(t_n), for all ε' > 0 there exists n0 ∈ N such that ‖x(t_n) − w‖ ≥ ‖z − w‖ − ε' for all n > n0. Take ε' corresponding to ε and n0 such that s_{n0} ≥ t_{n0}. Then, by (3.1) and the fact that ‖x(·) − w‖² is nondecreasing, we obtain ‖x(s_n) − z‖ ≤ ε for all n ≥ n0. □

We now give the proof of Theorem 3.6.
Proof of Theorem 3.6. Let {t_n}_{n∈N}, t_n → +∞, be an arbitrary increasing sequence. Since the trajectory is bounded, there exists a subsequence {t_{n_k}} such that x(t_{n_k}) converges weakly to some x̄. By (3.4) we have x̄ = z, i.e., x(t_{n_k}) converges weakly to z. By (3.1), the following inequality holds for this subsequence:

‖x(t_{n_k}) − (w + z)/2‖ ≤ ‖w − z‖/2. (*)

Since the norm is weakly lower semicontinuous, we also have lim inf_{k→+∞} ‖x(t_{n_k}) − w‖ ≥ ‖z − w‖. This and (*) imply lim inf_{k→+∞} ‖x(t_{n_k}) − z‖ = 0. Consequently, there is a subsequence {t_{n_{k_m}}} such that x(t_{n_{k_m}}) → z strongly. Thus we have shown that for any sequence {t_n}_{n∈N}, t_n → +∞, there exists a subsequence {t_{n_{k_m}}}_{m∈N} such that x(t_{n_{k_m}}) → z. This means that ‖x(t) − z‖ → 0 as t → +∞. □
In the next two propositions we propose variants of Theorem 3.6 in which we replace assumption (3.4) by other assumptions.
In the finite-dimensional case, the assertion of Theorem 3.6 can be obtained without assuming (3.4). Instead, we need to assume a strengthened form (C*) of assumption (C) on the vector field F.
Recall that assumption (C) says that ⟨F(x) | w − x⟩ ≤ 0 for all x ∈ D.

Proposition 6.2. Let X be a finite-dimensional space, let x(t), t ∈ [t0, +∞), be a solution of (DS-0), and assume that

(C*) ⟨F(x) | w − x⟩ < 0 for all x ∈ D, x ≠ z.

Then x(t) → z as t → +∞.

Proof. Let g(t) := (d/dt) ‖x(t) − w‖², t ≥ t0. We start by showing that there exists a sequence {t_k}, t_k → +∞, such that lim_{k→+∞} g(t_k) = 0.
Suppose, on the contrary, that there exist ε > 0 and t' ≥ t0 such that g(t) > ε for all t > t'. Then, for t large enough, we arrive at ‖x(t) − w‖ > ‖z − w‖, i.e. x(t) ∉ D, a contradiction. In this way we have proved that there exists a sequence {t_k}_{k∈N} such that t_k → +∞ and lim_{k→+∞} g(t_k) = 0.
Since X is finite-dimensional and D is closed and bounded, D is compact. Hence there exists a subsequence {t_{k_n}}_{n∈N} of {t_k}_{k∈N} such that x(t_{k_n}) converges, lim_{n→+∞} x(t_{k_n}) = x̄ ∈ D. Without loss of generality we may assume that the sequence {t_{k_n}}_{n∈N} is increasing. We have

lim_{n→+∞} g(t_{k_n}) = −2 lim_{n→+∞} ⟨F(x(t_{k_n})) | w − x(t_{k_n})⟩ = −2⟨F(x̄) | w − x̄⟩ = 0,

hence, by assumption (C*), x̄ = z. Now the assertion follows from Proposition 6.1.
Remark 6.3. By examining the above proof, we see that the assertion of Proposition 6.2 remains true in an infinite-dimensional Hilbert space X under the following additional assumption (besides (C*)) on F: F can be extended to conv D in such a way that F : conv D → X is weak-to-strong continuous on conv D, (W-S) i.e., for any weakly convergent sequence D ∋ x_n ⇀ x we have lim_{n→+∞} F(x_n) = F(x), where the limit is strong. The need for this additional assumption follows from the fact that if v_n → v strongly and u_n ⇀ u weakly, then ⟨v_n | u_n⟩ → ⟨v | u⟩. This fact allows one to show (6.1).
The following proposition is a variant of Proposition 6.2 valid in an infinite-dimensional Hilbert space under a more restrictive form (C**) of condition (C*).

Proposition 6.4. Let X be an infinite-dimensional space and let x(t), t ∈ [t0, +∞), be a solution of (DS-0). Assume that for all t ∈ [t0, +∞) such that x(t) ≠ z we have

(C**) ⟨F(x(t)) | w − x(t)⟩ ≤ α(t),

where α : [t0, +∞) → R− is integrable on every interval [t0, T], T > t0, and there exist c > 0 and T ≥ t0 such that α(t) ≤ −c for all t ≥ T. Then x(t) → z as t → +∞.

Proof. Note that if there exists t' ∈ [t0, +∞) such that x(t') = z, then x(t) = z for all t > t', since F(x(t')) = F(z) = 0.
Suppose then that x(t) ≠ z for all t. We have, for all t > T,

‖x(t) − w‖² ≥ ‖x(T) − w‖² − 2∫_T^t α(s) ds ≥ ‖x(T) − w‖² + 2c(t − T).

Thereby, for t > T + ‖w − z‖²/(2c) we arrive at a contradiction with x(t) ∈ D̃ ⊂ D.
Proposition 6.5. Let X be an infinite-dimensional space and let x(t), t ∈ [t0, +∞), be a solution of (DS-0). Assume that for all ε with 0 < ε < ‖x0 − z‖ we have

inf_{x ∈ D̃ \ B(z,ε)} (−⟨F(x) | w − x⟩) > 0.

Then x(t) → z as t → +∞.

Proof. If there exists t' ∈ [0, +∞) such that ⟨F(x(t')) | w − x(t')⟩ = 0, then we are done: in view of the assumptions of the proposition, x(t') = z, and by (3.1) and Proposition 4.1, x(t) = z for all t ≥ t'.
Suppose now that ⟨F(x(t)) | w − x(t)⟩ < 0 for all t ∈ [0, +∞). As in the proof of Proposition 6.2, there exists an increasing sequence {t_k}, t_k → +∞, such that g(t_k) = −2⟨F(x(t_k)) | w − x(t_k)⟩ → 0. If x(t_{k_n}) remained outside B(z, ε) for some ε > 0 along a subsequence, then by assumption there would exist c < 0 such that ⟨F(x(t_{k_n})) | w − x(t_{k_n})⟩ < c, a contradiction with g(t_{k_n}) → 0. Hence x(t_k) → z. Now the assertion follows from Proposition 6.1.

Projective dynamical system
In this section we give an example of the system (DS-0). Let w, z ∈ X. We consider the projective dynamical system

ẋ(t) = P_{C(x(t))}(w) − x(t), x(t0) = x0, (PDS)

where C : D ⇒ X is a multifunction satisfying the conditions (A′)-(D′) listed in Remark 7.1. As consequences of Theorem 3.5 and Theorem 3.6 we can formulate the following theorems.
Proof. First, let us show that (A), (B), (C) hold. (A′) implies that z is the only stationary point of (PDS); hence (A) holds.
Recall that D is a closed, convex subset of B̄((w + z)/2, ‖w − z‖/2) and D̃ is given as in (3.2). By (D′), the projection P_{C(x)}(w) is well defined for all x ∈ D. By (B′) and (C′), assumption (B) is satisfied, since for all x ∈ D̃ ⊂ D and any h ∈ [0, 1] the point x + h(P_{C(x)}(w) − x) = (1 − h)x + hP_{C(x)}(w) belongs to D. Note that by taking h = 1 we obtain P_{C(x)}(w) ∈ D for any x ∈ D̃. Assumption (C′) is equivalent to (C) for F(x) = P_{C(x)}(w) − x. Observe that the mapping F(x) = P_{C(x)}(w) − x is bounded on D̃: indeed, for any x ∈ D̃ we have

‖F(x)‖ ≤ ‖P_{C(x)}(w)‖ + ‖x‖ ≤ 2R,

where R = sup_{x∈D} ‖x‖. Now, system (PDS) is of the form (DS-0) with F(x) = P_{C(x)}(w) − x, and all the assumptions of Theorem 3.5 are satisfied. The assertion of the theorem follows from Theorem 3.5.
Proof. By the proof of Theorem 7.2, for (PDS) assumptions (A), (B) and (C) are satisfied, and by assumption the mapping x ↦ P_{C(x)}(w) is locally Lipschitz continuous. Now the assertion follows from Theorem 3.6.
To investigate the local Lipschitzness of x ↦ P_{C(x)}(w) on D \ {z} (and the continuity of x ↦ P_{C(x)}(w) on D) one should take into account the form of the multifunction C. The behaviour of the projection of a given w onto a polyhedral multifunction C given by a finite number of linear inequalities and equalities was investigated in, e.g., [8, Corollary 2]; see also [25, Theorem 6.5].
Depending on the choice of the operator T in Proposition 7.4, we obtain dynamical systems of the form (PDS) related to different algorithms. Our approach encompasses dynamical systems related to the following algorithms.
Ex 1. When T : X → X is firmly quasinonexpansive and (Id − T) is demiclosed at 0, the dynamical system (DS-0) corresponds to the best approximation algorithm for finding a point z in the set of fixed points of T, i.e., for finding z ∈ X such that z = P_{Fix T}(w) (see [6, Theorem 30.8]).

Ex 2. When T = J_A, where A : X ⇒ X is maximally monotone, the dynamical system (DS-0) corresponds to the best approximation algorithm for finding x ∈ X such that 0 ∈ Ax (see [6, Corollary 30.11]). Let us recall that the resolvent of A is defined as J_A : X → X, J_A = (Id + A)⁻¹.

Ex 3. When T = (1/2)(Id + J_{γA} ∘ (Id − γB)), where A : X ⇒ X is maximally monotone, B : X → X is β-cocoercive and γ ∈ (0, 2β], the dynamical system (DS-0) corresponds to the best approximation algorithm for finding x ∈ X such that 0 ∈ Ax + Bx (see [6, Corollary 30.12]).

Ex 4. When T : H × G → H × G is defined as in [2], we recover the primal–dual setting of Section 1. Let us recall that for any γ > 0, the Yosida approximation of A is γA : H → H, γA = (1/γ)(Id − J_{γA}).

For other multifunctions C and other properties of projections onto moving sets, see, e.g., [29].

Appendix

Lemma 8.2 (about extendability). Let x(t) be defined and continuously differentiable in a left-sided neighbourhood of t0, i.e.
x(·) ∈ C¹((t0 − γ, t0), D) (8.1) and assume that the limit x1 := lim_{t→t0−} ẋ(t) exists. By the weakened formula for finite increments, we obtain Lipschitz continuity of the function x(t) on (t0 − ζ, t0) with some constant L. Therefore, for the function x(t), the Cauchy condition for the existence of the left limit at time t0 is satisfied, and there exists x̄0 = lim_{t→t0−} x(t). (8.4)

Put

x̄(t) := x(t) for t ∈ (t0 − γ, t0), and x̄(t0) := x̄0.

It is obvious that the function constructed in this way is continuous on (t0 − ζ, t0]. Now it is enough to show that the left derivative of x̄ at t0 equals x1. To use the Newton–Leibniz formula we introduce an auxiliary function.

Remark 7.1. In (PDS), C : D ⇒ X is a multifunction such that:

(A′) for all x ∈ D, z ∈ C(x) and P_{C(x)}(w) = x iff x = z;
(B′) for all x ∈ D we have P_{C(x)}(w) ∈ D;
(C′) ⟨P_{C(x)}(w) − x | w − x⟩ ≤ 0 for all x ∈ D;
(D′) for all x ∈ D, C(x) is closed and convex.

Condition (D′) ensures that the projection onto C(x), x ∈ D, is uniquely defined. The condition ⟨P_{C(x)}(w) − x | w − x⟩ ≤ 0 for all x ∈ D is equivalent to the condition that P_{C(x)}(w) ∈ H(w, x) for any x ∈ D. This implies that for any x ∈ D and any h ∈ C(x) we have ⟨h − x | w − x⟩ ≤ 0; the latter implies P_{C(x)}(w) ∈ H(w, x). Therefore, (C′) is equivalent to the condition: ∀x ∈ D ∀h ∈ C(x), ⟨h − x | w − x⟩ ≤ 0.

Let us comment on the conditions (A′), (B′), (C′). Condition (A′) is equivalent to saying that z is the only stationary point of the vector field F(x) = P_{C(x)}(w) − x inside the considered set D. Condition (B′), together with the convexity of the set D, ensures that for any λ ∈ [0, 1] and any x ∈ D̃ ⊂ D we have (1 − λ)x + λP_{C(x)}(w) ∈ D. Condition (C′) ensures that P_{C(x)}(w) ∈ H(w, x) and that the function t ↦ ‖x(t) − w‖ is nondecreasing (see, e.g., Proposition 4.1), where x(t) is a solution of (PDS) (whenever it exists).
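As a concrete numerical illustration of Ex 2 and Ex 3 of Section 7, one can take X = R, A = ∂|·| (so that J_{γA} is soft-thresholding) and B(x) = x − b, which is 1-cocoercive. The scalar data below (b = 3, γ = 1) are illustrative assumptions; the fixed point of T then solves 0 ∈ Ax + Bx.

```python
import numpy as np

def J(gamma, x):
    """Resolvent J_{γA} = (Id + γA)^{-1} for A = ∂|·| on the real line:
    soft-thresholding with threshold γ."""
    return float(np.sign(x)) * max(abs(x) - gamma, 0.0)

b = 3.0
B = lambda x: x - b       # B = ∇(1/2)(· − b)², β-cocoercive with β = 1
gamma = 1.0               # step size γ ∈ (0, 2β]

# Ex 3: T = (1/2)(Id + J_{γA} ∘ (Id − γB)); Fix T = {x : 0 ∈ Ax + Bx}
T = lambda x: 0.5 * (x + J(gamma, x - gamma * B(x)))

x = 0.0
for _ in range(80):
    x = T(x)
# 0 ∈ ∂|x| + x − 3 has the unique solution x* = 2
```

Plain iteration of T already converges here; the Haugazeau-type wrapper Q of Section 1 would additionally deliver the best approximation P_{Fix T}(w) for a prescribed anchor w.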