New strong convergence method for the sum of two maximal monotone operators

This paper aims to obtain a strong convergence result for a Douglas–Rachford splitting method with inertial extrapolation step for finding a zero of the sum of two set-valued maximal monotone operators without any further assumption of uniform monotonicity on any of the involved maximal monotone operators. Furthermore, our proposed method is easy to implement and the inertial factor in our proposed method is a natural choice. Our method of proof is of independent interest. Finally, some numerical implementations are given to confirm the theoretical analysis.


Introduction
Let H be a real Hilbert space with scalar product ⟨., .⟩ and induced norm ‖ ⋅ ‖ . An operator A ∶ H → 2 H with domain D(A) is said to be monotone if A is maximal monotone if its graph is not properly contained in the graph of any other monotone operators.
Let us consider the inclusion problem of the form where A and B are set-valued maximal monotone operators in H. Throughout this paper, we assume that the set of solution, denoted by S, of (1) is nonempty. The proximal point algorithm (PPA) is the well-known method for solving inclusion problem (1) (see, Lions and Mercier 1979;Martinet 1970;Moreau 1965;Rockafellar 1976). The PPA for solving (1) is expressed as where > 0 is the proximal parameter. Now, implementing PPA (2) to solve (1) requires computing the resolvent operator of the sum A + B exactly. This is very difficult to implement and could be as hard as the original inclusion problem (1). This difficulty has led many authors to consider the operator splitting approach to solve (1). The aim of operator splitting method is to circumvent the computation of J A+B when implementing (2) but rather consider the computation of J A and J B (Eckstein and Bertsekas 1992;Glowinski and Le Tallec 1989;Lions and Mercier 1979).
When both A and B are single-valued linear operators in (1), Douglas and Rachford (1956) proposed the following method for solving heat conduction problems: We can eliminate u k+ 1 2 in (3) above and obtain Define z k ∶= J B −1 u k ⇔ u k = J B (z k ) . Then, (4) reduces to the following splitting method (known as Douglas-Rachford splitting method) (5) z k+1 = J A (2J B − I)z k + (I − J B )z k . Boţ et al. (2015) gave the following method for solving (1): z 0 = z 1 ;

Motivations and contributions
where { k } is a non-decreasing sequence with 0 ≤ k ≤ < 1, ∀k ≥ 1 and , , > 0 such that  Boţ et al. (2015) obtained weak convergence analysis of algorithm (6) for finding common zeros of the sum of two maximal monotone operators and illustrate their results through some numerical experiments. The same conditions (a) and (b) above have been used in recent works in Dong et al. (2018), Shehu (2018) and other associated papers. When k = 0 , it was proved in Bauschke and Combettes (2011, Thm. 25.6(vii)) that {z k } in (6) converges strongly to a solution of (1) if either A or B is uniformly monotone (A is uniformly monotone if ⟨x − y, u − v⟩ ≥ (‖x − y‖), ∀u ∈ Ax, v ∈ Ay , where ∶ [0, ∞) → [0, ∞) is increasing and vanishes only at zero) on every nonempty bounded subset of its domain. When k = 1 and B ≡ 0 , then (6) reduces to the inertial proximal point method proposed by Alvarez and Attouch (2001). In this case, Alvarez and Attouch (2001) assumed that the inertial factor k satisfies the condition 0 ≤ k ≤ k+1 ≤ < 1 3 in their convergence result. However, the assumption on the inertial factor k imposed in (6) does not appear as simple as condition 0 ≤ k ≤ k+1 ≤ < 1 3 , assumed by Alvarez and Attouch (2001).
Problems arise in infinite dimensional spaces in many disciplines like economics, image recovery, electromagnetics, quantum physics, and control theory. For such problems, strong convergence of sequence of iterates z k of the proposed iterative procedure is often much more desirable than weak convergence. This is because strong convergence translates the physically tangible property that the energy ‖z k − z‖ of the error between the iterate z k and a solution z eventually becomes arbitrarily small. Another importance of strong convergence is also underlined in the works of Güler (1991), where a convex function f is minimized through the proximal point algorithm. Güler (1991) showed that the rate of convergence of the value sequence {f (z k )} is better when {z k } converges strongly than when it converges weakly. For more details on importance of strong convergence, please see Bauschke and Combettes (2001).
Strong convergence methods for solving problem (1) when B is set-valued maximal monotone operator and A is a single-valued -inverse strongly monotone operator (i.e., ⟨Ax − Ay, x − y⟩ ≥ ‖Ax − Ay‖ 2 , ∀x, y ∈ H ) have been studied extensively in the literature (see, for example, Boikanyo 2016;Chang et al. 2019;Cholamjiak 2016;Cholamjiak et al. 2018;Dong et al. 2017;Gibali and Thong 2018;López et al. 2012;Riahi et al. 2018;Shehu 2016Shehu , 2019Shehu and Cai 2018;Thong and Cholamjiak 2019;Wang and Wang 2018). However, there are still few results on the strong convergence results concerning more general case of problem (1) when A and B are set-valued maximal monotone operators. This is the gap that this paper aims to fill in.
Our aim in this paper is to prove the strong convergence analysis of the inertial Douglas-Rachford splitting method with different conditions from the conditions (a) and (b) assumed in Boţ et al. (2015) without assuming uniform monotonicity on either maximal monotone operator A or B. Furthermore our assumptions on the inertial factor k here in this paper are the same assumptions in the results of Alvarez and Attouch (2001) (which is a special case of our result). In summary, • We prove strong convergence analysis of inertial Douglas-Rachford splitting method without using the conditions (a) and (b) assumed in Boţ et al. (2015). Our inertial conditions are the same as the ones assumed in Alvarez and Attouch (2001) for finding zero of a set-valued maximal monotone operator using inertial proximal method. • We obtain strong convergence results without assuming that any of the involved maximal monotone operators is uniformly monotone on every nonempty bounded subset. Our strong convergence results are much more general than the current ones in Bauschke and Combettes (2011) and other associated works where strong convergence is obtained. • Some numerical examples are given to confirm the importance of the presence of inertial term in our method.
The paper is therefore organized as follows: We first recall some basic explanations of Douglas-Rachford splitting method and introduce our inertial Douglas-Rachford splitting method alongside some results in Sect. 2. The analysis of strong convergence of our proposed method is then investigated in Sect. 3. We give numerical implementations in Sect. 4 and conclude with some final remarks in Sect. 5.

Preliminaries
Let us first recall some basics that are required to derive and analyze the Douglas-Rachford splitting method; for the corresponding details, we refer, Eckstein and Bertsekas (1992), He and Yuan (2015), Svaiter (2011) and Zhang and Cheng (2013). Let > 0 be a fixed parameter, and let us denote by the resolvents of A and B, respectively, which are known to be firmly nonexpansive (operator T is firmly-nonexpansive if ⟨x − y, Tx − Ty⟩ ≥ ‖Tx − Ty‖ 2 , ∀x, y ∈ H ). Furthermore, let us write for the corresponding reflections (also called Cayley operators), and note that the reflections are nonexpansive operators (T is nonexpansive if ‖Tx − Ty‖ ≤ ‖x − y‖, ∀x, y ∈ H). In Eckstein and Bertsekas (1992) and He and Yuan (2015), the maximal monotone operator S ,A,B is defined as It was shown in Eckstein and Bertsekas (1992) that the Douglas-Rachford splitting method (5) can be converted to By Eckstein and Bertsekas (1992, Thm. 5), for any given zero z * of S ,A,B , J B (z * ) is a zero of A + B . Therefore, J B (z * ) is a solution of (1) whenever z * satisfies Consequently, the Douglas-Rachford splitting method (5) can be rewritten as where e(z k , ) ∶= 1 2 (z k − R A oR B (z k )).
We next recall some properties of the projection. For any point u ∈ H , there exists a unique point P C u ∈ C such that P C is called the metric projection of H onto C. We know that P C is a nonexpansive mapping of H onto C. It is also known that P C satisfies In particular, we get from (10) that Furthermore, P C x is characterized by the properties

This characterization implies that
The following result is obtained (Shehu et al. 2020) but we give the proof for the sake of completeness.
Lemma 2.1 Let S ⊆ H be a nonempty, closed, and convex subset of a real Hilbert space H. Let u ∈ H be arbitrarily given, z ∶= P S u , and Proof By definition, it follows immediately that z ∈ Ω ∩ S . Conversely, take an arbitrary y ∈ Ω ∩ S . Then, in particular, we have y ∈ Ω , and it therefore follows that Using z = P S u together with the characterization (12), we also have In particular, since y ∈ S , we therefore have ⟨u − z, z − y⟩ ≥ 0 . Hence (14) implies ‖y − z‖ 2 ≤ 0 , so that y = z . This completes the proof. ◻ Finally, we state some basic properties that will be used in our convergence theorems.

Analysis of the convergence
For the rest of this paper, we assume that Lemma 3.1 Let {z k } be the sequence generated by (9). For any z satisfying (7), we have Proof By (9), we get We know that e(y k , ) = 1 2 (y k − R A oR B (y k )) , where > 0 is the proximal parameter, is firmly-nonexpansive (see, He and Yuan 2015, lem. 2.2). Thus, In particular, for z = R A oR B (z) , we obtain Putting (17) into (16), we have Recall that k e(y k , ) = y k − z k+1 implies that Using (19) in (18) and the condition that 0 < ≤ k ≤ 1 , we have ◻ Lemma 3.2 Let {z k } be the sequence generated by (9). For any z satisfying (7), we have Proof Moreover, from the definition of y k , we obtain using Lemma 2.2 (a) that and, similarly, with z replaced by z k+1 in the previous formula, Substituting (21) and (22) into (15) and eliminating identical terms, we get Therefore, we obtain where the last identity exploits Lemma 2.2 (a) twice. We therefore have Using the fact that { k } is non-decreasing and { k } is non-increasing, we then obtain which is the desired inequality. ◻ Our first central result below shows that the sequence {z k } generated by (9) is bounded.

Lemma 3.3
The sequence {z k } generated by (9) is bounded.
Proof A simple re-ordering of (20) implies that where the equality uses once again Lemma 2.2 (a). Hence, by cancellation, re-ordering, and neglecting a non-positive term on the right-hand side, we obtain Then (29) consequently implies that Since { k } is non-increasing in (0,1), this implies It then follows from (28) and (30) that Since k ≤ k+1 , k+1 = k e k+1 and { k } is non-increasing in (0,1), we therefore get which can be rewritten as (since { k } is non-increasing in (0,1)) Since the sequence { k } belongs to the interval [0, ] , we have Using lim k→∞ k = 0 and ∈ [0, 1∕3) , it follows that the right-hand side is eventually bounded from below by a positive number, i.e., there is a constant > 0 such that 1 − k+1 3 + 2(e k+1 − 1) − k ≥ for all k ∈ ℕ sufficiently large, say, for all k ≥ k 0 . Hence, we have This implies that for k ≥ k 0 , Thus, dividing by k+1 and omitting a non-positive term, we get where t k ∶= ∑ k i=1 i . Since k ∈ (0, 1) for all k ∈ ℕ , it is easy to see that k e t k+1 ≤ e 2 (e t k − e t k−1 ) for all k ≥ 2 , so that which, by (32), e −t k+1 ≤ 1 , and the fact that { k } belongs to the interval [0, ] ⊂ [0, 1 3 ) , yields Using (33), ∈ [0, 1) , and the convergence of the geometric series, a simple calculation gives Using once again that < 1 , this shows that {z k } is bounded. ◻ Next, we formulate a simple lemma that turns out to be useful for proving the strong convergence result.
Lemma 3.4 Let {z k } be the sequence generated by (9). Define for all k ∈ ℕ . Then u k ≥ 0 for all k ∈ ℕ.
Proof Since { k } is non-decreasing with 0 ≤ k < 1 3 , and by Lemma 2.2 (a), we have and this completes the proof. ◻ Before we prove our main strong convergence result, we state another preliminary result which provides sufficient conditions for the strong convergence of the sequence {z k } generated by our method (9). In our strong convergence result, we will then show that these sufficient conditions automatically hold.
Lemma 3.5 Let {z k } be the sequence generated by (9). Assume that and Then the entire sequence {z k } converges strongly to the solution z.

Proof By assumption, we have
We claim that this already implies from which the strong convergence of the entire sequence {z k } to z follows immediately. Assume this limit does not hold. Then there is a subset K ⊆ ℕ and a constant > 0 such that Since lim k→∞ ‖z k+1 − z k ‖ = 0 by the assumption and 0 ≤ < 1 , then (recall that if {a k } and {b k } are bounded sequences in ℝ and one of either {a k } or {b k } converges, then lim sup k→∞ (a k + b k ) = lim sup k→∞ a k + lim sup k→∞ b k ) Using (34) and k ≤ < 1 , we get Consequently, we have lim sup k∈K ‖z k − z‖ ≤ 0. Since lim inf k∈K ‖z k − z‖ ≥ 0 obviously holds, it follows that lim k∈K ‖z k − z‖ = 0. This implies [by (35)] for all k ∈ K sufficiently large, a contradiction to the assumption that lim k→∞ ‖z k+1 − z k ‖ = 0. This completes the proof. ◻ We are now ready to obtain strong convergence of the sequence {z k } generated by (9) to an element of S.
Theorem 3.6 The sequence {z k } generated by (9) strongly converges to z, where z = P S z 0 .
Proof Let u k denote the nonnegative number defined in Lemma 3.4, and let us apply Lemma 3.2. We obtain from (20) that We now consider two cases. Case 1 Suppose {u k } is eventually a monotonically decreasing sequence, i.e. for some k 0 ∈ ℕ large enough, we have u k+1 ≤ u k for all k ≥ k 0 . Then, since u k is nonnegative for all k ∈ ℕ by Lemma 3.4, we obviously get that {u k } is a convergent sequence. Consequently, it follows that lim k→∞ u k = lim k→∞ u k+1 . Since {z k } is bounded by Theorem 3.3, there exists M > 0 such that 2�⟨z k − z, z k − z 0 ⟩� ≤ M. Moreover, it follows that there exist N ∈ ℕ and 1 > 0 such that 1 − 3 k+1 − k ≥ 1 for all k ≥ N . Therefore, for k ≥ N , we obtain from (36) that Hence Together with k → 0 , the boundedness of {z k } , and the convergence of {u k } , we therefore obtain from the definition of u k that the limit exists and is equal to lim k→∞ u k+1 . In particular, Lemma 3.4 therefore implies that ≥ 0 . We will show that = 0 holds; then (37) together with the fact that k ≤ < 1 for all k ∈ ℕ yields the strong convergence of the sequence {z k } to the solution z.
By contradiction, assume that > 0 . Since {z k } is bounded by Theorem 3.3, it is easy to see that we can choose a subsequence {z k j } which converges weakly to an element p ∈ H and such that We show that p ∈ S . Observe that the updating rule for y k implies This yields Let Ty ∶= 1 2 y + 1 2 R A oR B (y), y ∈ H . Then it is clear that T is nonexpansive and z ∈ F(T) ∶= {x ∈ H ∶ x = Tx} if and only if z = R A oR B (z) . Similarly, it is easy to see that e(y k , ) = 1 2 (y k − R A oR B (y k )) = y k − Ty k . Therefore, Demiclosedness Principle of T implies that p ∈ F(T) . Hence, p ∈ S . This implies that where the inequality follows from the characterization (12) of a projection applied to z = P S z 0 and p ∈ S . Since (37) yields and since > 0 by assumption, we have for some sufficiently large k 1 ∈ ℕ . Using the identity we therefore get from (38). Using once again the assumption that > 0 , this implies for some sufficiently large k 2 ∈ ℕ, k 2 ≥ k 1 . From (36), we therefore obtain This implies lim k→∞ ‖y k − Ty k ‖ = lim k→∞ ‖e(y k , )‖ = 0.
where the second inequality follows from Lemma 3.4. Since > 0 , this gives the summability of the sequence { k } , a contradiction to our assumption. Hence we must have = 0 , and this yields the strong convergence of the sequence {z k } to z. Case 2 Assume that {u k } is not eventually monotonically decreasing. Then let ∶ ℕ → ℕ be the map defined for all k ≥ k 0 (for some k 0 ∈ ℕ large enough) by Clearly, (k) is a non-decreasing sequence such that (k) → ∞ for k → ∞ and u (k) ≤ u (k)+1 for all k ≥ k 0 . Hence, similar to the proof of Case 1, we therefore obtain from (36) that for some constant M > 0 . Thus, Using the same technique of the proof as in Case 1, one can also derive the limits Again observe that for j ≥ 0 by (36), we have u j+1 < u j when x j ∉ Ω ∶= {x ∈ H ∶ ⟨x − z 0 , x − z⟩ ≤ 0} (note that this Ω is the same set as in Lemma 2.1). Hence x (k) ∈ Ω for all k ≥ k 0 since u (k) ≤ u (k)+1 . Since {x (k) } is bounded, we may choose a subsequence (which we again call {x (k) } ) which converges weakly to some x * ∈ H . As Ω is a closed and convex set, it is then weakly closed and so x * ∈ Ω . Using (43), one can see as in Case 1 that z (k) ⇀ x * and x * ∈ S . Consequently, we have x * ∈ Ω ∩ S . In view of Lemma 2.1, however, the intersection Ω ∩ S contains z as its only element. We therefore get x * = z . Furthermore, we have since x (k) ∈ Ω . Taking lim sup in this last inequality gives

Hence
We claim that this implies lim k→∞ u (k)+1 = 0 . By definition, u (k)+1 is equal to Adding and subtracting x (k) inside the norm of the first term, and using (41), (44), we see that the first term goes to zero. The second term converges to zero also in view of (44), taking into account the boundedness of { k } . The third term vanishes in the limit because of (41) and noting once again that { k } is a bounded sequence. Finally, the last term goes to zero since { k } converges to zero and the sequence {z k } is bounded by Theorem 3.3.
We next show that we actually have lim k→∞ u k = 0 . To this end, first observe that, On the other hand, Lemma 3.4 implies that lim inf k→∞ u k ≥ 0 . Together we obtain lim k→∞ u k = 0.
Consequently, the boundedness of {z k } , assumptions on our iterative parameters and (36) show that Hence the definition of u k yields Using our assumption, it is not difficult to see that this implies the strong convergence of the entire sequence {z k } to the particular solution z. The statement therefore follows from Lemma 3.5. ◻ In the special case when B is a set-valued maximal monotone operator and A is a single-valued -inverse strongly monotone operator in problem (1), iterative procedure (9) reduces to the following: z 0 , z 1 ∈ H, with 0 < < 2 . Moreover, we obtain strong convergence for this special case of monotone inclusion for which its proof can be easily obtained by following line of arguments of previous lemmas and Theorem 3.6.

Corollary 3.7 Suppose B is a set-valued maximal monotone operator and A is a single-valued -inverse strongly monotone operator. Assume that
Then {z k } strongly converges to z, where z = P S z 0 .
We next relate our results to some existing results from the literature.

Remark 3.8
(a) In the results of Thong and Vinh (see Thong and Vinh 2019, Thm. 3.5), strong convergence for monotone inclusion was obtained under some assumptions on the iterative sequence. The monotone inclusion studied in Thong and Vinh (2019) involves sum of a set-valued maximal monotone operator and singlevalued inverse-strongly monotone operator. In this paper, our method is proposed such that no assumption is made on the iterative sequence even for a more general result considered here. (b) The Algorithm (45) could be taken as the inertial strong convergence version of some recent results in Attouch and Cabot (2019), Boţ and Csetnek (2016), Lorenz and Pock (2015) and Villa et al. (2013). ◊
We compared the algorithm (9), Algorithm 3 in Thong and Vinh (2019) and the algorithm (26) in Shehu (2016). From Fig. 2, we know that the performance of the algorithm (9) is better than that of the other two algorithms.

Example 4.3
Let us consider the following well known 1 -regularized least squares problem, which consists of finding a sparse solution to an underdetermined linear system. Suppose that we solve the following problem: where D ∈ ℝ m×n and b ∈ ℝ m . In this case, while We remark that there is a commercial software, based on the projected gradient method for solving problem (47), for example SPGL1 (van den Berg and Friedlander 2007; Lorenz 2013) and FISTA (Beck and Teboulle 2009), but this is beyond the scope of this paper. Our interest here is to demonstrate the efficiency of our proposed method (9) using problem (47). We generate random problems using different choices of for m = 100 and n = 1000 . In Algorithm 3 of Thong and Vinh (2019) and the algorithm (26) of Shehu (2016), we chose = 1 and = 1.9∕(max(eig(D T D))) , and in the algorithm (9), we chose = 0.5 . In addition, select r k = 0.2 in algorithm (26) of Shehu (2016). Table 3 shows that the algorithm (9) is better when k = 0.33 . The numerical result is described in Fig. 3, it illustrates that the performance of Algorithm (9) is better than the other two algorithms.  (a) We point out that there are different strategies in the current literature to enforce strong convergence on proximal-like algorithms (in particular, DR splitting); see, e.g., Solodov and Svaiter (2000) and Hirstoaga (2006). In this regard, the results

Final remarks
In this paper we propose a Douglas-Rachford splitting method with inertial extrapolation step and give strong convergence analysis of the method. The method is much more applicable for a general class of maximal monotone operators and no uniform Fig. 3 Comparison of three algorithms monotonicity on any of the involved maximal monotone operators is assumed. Furthermore, the analysis of the algorithm is obtained under the natural condition of the inertial factor k being monotone non-decreasing and bounded away from 1/3. Some numerical illustrations are given to test the efficiency and implemnetation of the proposed scheme. The results obtained in this paper could serve as the strong convergence counterpart of already obtained weak convergence methods for inertial Douglas-Rachford splitting methods (Bauschke and Combettes 2011;Beck and Teboulle 2009;Boţ et al. 2015;Lorenz and Pock 2015;Thong and Vinh 2019) in the literature.
Our future project include the following: • to modify the proposed method (9) in this paper so that the bound of the inertial factor k could exceed 1/3 and possibly lead to a faster convergence; and • to obtain the rate of convergence of method (9). As far as we know, this has not been obtained before in the literature.