Convergence Results of Forward-Backward Algorithms for the Sum of Monotone Operators in Banach Spaces
Abstract
It is well known that many problems in image recovery, signal processing, and machine learning can be modeled as finding zeros of the sum of a maximal monotone operator and a Lipschitz continuous monotone operator. Many papers have studied forward-backward splitting methods for finding zeros of the sum of two monotone operators in Hilbert spaces, and most of the splitting methods proposed in the literature concern the sum of a maximal monotone operator and an inverse-strongly monotone operator in Hilbert spaces. In this paper, we consider splitting methods for finding zeros of the sum of a maximal monotone operator and a Lipschitz continuous monotone operator in Banach spaces, and we obtain weak and strong convergence results for such zeros. Many problems previously studied in the literature can be treated as special cases of the results in this paper.
Keywords
Inclusion problem · 2-uniformly convex Banach space · forward-backward algorithm · weak convergence · strong convergence
Mathematics Subject Classification
47H05 · 47J20 · 47J25 · 65K15 · 90C25
1 Introduction
The inclusion problem (1) contains, as special cases, the convexly constrained linear inverse problem, the split feasibility problem, the convexly constrained minimization problem, fixed point problems, variational inequalities, the Nash equilibrium problem in noncooperative games, and many more. See, for instance, [11, 15, 28, 33, 35, 36] and the references therein.
In this paper, we extend Tseng’s result [48] to a Banach space. We first prove the weak convergence of the sequence generated by our proposed method, assuming that the duality mapping is weakly sequentially continuous. This weak convergence is a generalization of Theorem 3.4 given in [48]. We next prove the strong convergence result for problem (1) under some mild assumptions and this extends Theorems 1 and 2 in [18] to Banach spaces. Finally, we apply our convergence results to the composite convex minimization problem in Banach spaces.
2 Preliminaries
In this section, we define some concepts and state a few basic results that we will use in our subsequent analysis. Let \(S_E\) be the unit sphere of E, and \(B_E\) the closed unit ball of E.
if E is reflexive and strictly convex with the strictly convex dual space \(E^*\), then J is a single-valued, one-to-one, and onto mapping. In this case, we can define the single-valued mapping \(J^{-1}:E^*\rightarrow E\), and we have \(J^{-1}=J_{*}\), where \(J_{*}\) is the normalized duality mapping on \(E^*\);
if E is uniformly smooth, then J is uniformly norm-to-norm continuous on each bounded subset of E.
For \(\ell _p\): \(Jx=\Vert x\Vert _{\ell _p}^{2-p}y \in \ell _q\), where \( x=(x_j)_{j\ge 1}\), \(y=(x_j|x_j|^{p-2})_{j\ge 1}\), and \(\frac{1}{p}+\frac{1}{q}=1\).
For \(L_p\): \(Jx=\Vert x\Vert _{L_p}^{2-p}|x|^{p-2}x \in L_q\), \(\frac{1}{p}+\frac{1}{q}=1\).
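On a finite section of \(\ell _p\), the formula above can be evaluated directly. The sketch below (our illustration, not from the paper) computes \(Jx\) and checks the two defining properties of the normalized duality mapping, \(\langle x,Jx\rangle =\Vert x\Vert _p^2\) and \(\Vert Jx\Vert _q=\Vert x\Vert _p\):

```python
import numpy as np

def duality_map_lp(x, p):
    """Normalized duality mapping on (a finite section of) ell_p:
    Jx = ||x||_p^{2-p} * (x_j |x_j|^{p-2})_j, viewed as an element of ell_q."""
    norm = np.linalg.norm(x, ord=p)
    if norm == 0.0:
        return np.zeros_like(x)
    # sign(x)*|x|^{p-1} equals x*|x|^{p-2} componentwise
    return norm ** (2 - p) * np.sign(x) * np.abs(x) ** (p - 1)

x, p = np.array([3.0, -4.0, 1.0]), 3.0
q = p / (p - 1)                      # conjugate exponent, 1/p + 1/q = 1
Jx = duality_map_lp(x, p)
assert abs(np.dot(x, Jx) - np.linalg.norm(x, p) ** 2) < 1e-9
assert abs(np.linalg.norm(Jx, q) - np.linalg.norm(x, p)) < 1e-9
```

For \(p=2\) the formula collapses to \(Jx=x\), consistent with J being the identity in Hilbert spaces.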
Lemma 2.1
The minimum value of the set of all \(\mu _E \ge 1\) satisfying (5) for all \(x,y\in E\) is denoted by \(\mu \) and is called the 2-uniform convexity constant of E; see [5]. It is obvious that \(\mu =1\) whenever E is a Hilbert space.
Lemma 2.2
([4]). Let \(\displaystyle \frac{1}{p}+\frac{1}{q}=1,\ p,q>1\). The space E is q-uniformly smooth if and only if its dual \(E^*\) is p-uniformly convex.
Lemma 2.3
 (1)
E is 2-uniformly smooth
 (2)There exists a constant \(\kappa >0\) such that, for all \(x,y\in E\),$$\begin{aligned} \Vert x+y\Vert ^2\le \Vert x\Vert ^2+2\langle y,J(x)\rangle +2\kappa ^2\Vert y\Vert ^2, \end{aligned}$$where \(\kappa \) is the 2-uniform smoothness constant. In Hilbert spaces, \(\kappa =\frac{1}{\sqrt{2}}\).
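In the Hilbert case, where J is the identity and \(\kappa =\frac{1}{\sqrt{2}}\), the inequality in (2) holds with equality, since it is then just the expansion of \(\Vert x+y\Vert ^2\). A quick numerical check of this (our illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
kappa = 1 / np.sqrt(2)   # 2-uniform smoothness constant of a Hilbert space
for _ in range(100):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    lhs = np.linalg.norm(x + y) ** 2
    # In a Hilbert space J(x) = x and 2*kappa^2 = 1, so the bound is exact
    rhs = np.linalg.norm(x) ** 2 + 2 * np.dot(y, x) + 2 * kappa ** 2 * np.linalg.norm(y) ** 2
    assert lhs <= rhs + 1e-9
```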
Definition 2.4
 (a)strongly monotone with modulus \(\gamma >0\) on X if$$\begin{aligned} \langle Ax-Ay, x-y\rangle \ge \gamma \Vert x-y\Vert ^2, \quad \forall x,y \in X. \end{aligned}$$In this case, we say that A is \(\gamma \)-strongly monotone;
 (b)monotone on X if$$\begin{aligned} \langle Ax-Ay, x-y\rangle \ge 0, \quad \forall x,y \in X; \end{aligned}$$
 (c)
Lipschitz continuous on X if there exists a constant \( L > 0 \) such that \( \Vert Ax - Ay \Vert \le L \Vert x-y \Vert \) for all \( x, y \in X \).
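These notions can be told apart on a concrete operator. The sketch below (our illustration, not from the paper) checks that the skew linear map \(A(x)=Mx\) on \(\mathbb {R}^2\) is monotone and \(\Vert M\Vert \)-Lipschitz, but not strongly monotone, since \(\langle Ax-Ay,x-y\rangle =\langle M(x-y),x-y\rangle =0\) for skew-symmetric M:

```python
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric: M^T = -M
A = lambda x: M @ x
L = np.linalg.norm(M, 2)                  # Lipschitz constant = spectral norm

rng = np.random.default_rng(1)
for _ in range(100):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    inner = np.dot(A(x) - A(y), x - y)
    # Monotone with equality everywhere, hence not strongly monotone:
    assert abs(inner) < 1e-9
    assert np.linalg.norm(A(x) - A(y)) <= L * np.linalg.norm(x - y) + 1e-9
```

Operators of this skew type are monotone and Lipschitz but not inverse-strongly monotone, which is exactly the class the paper's methods are designed to handle.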
We give some examples of monotone operators in Banach spaces, as given in [2].
Example 2.5
Let us consider another example from quantum mechanics.
Example 2.6
Example 2.7
This example gives perhaps the most famous example of a monotone operator, viz. the p-Laplacian \(-\mathrm{div}(|\nabla u|^{p-2}\nabla u): W^1_0(L_p(\Omega ))\rightarrow \Big (W^1_0(L_p(\Omega ))\Big )^*\), where \(u:\Omega \rightarrow \mathbb {R}\) is a real function defined on a domain \(\Omega \subset \mathbb {R}^n\). The p-Laplacian is a monotone operator for \(1<p<\infty \) (in fact, it is strongly monotone for \(p \ge 2\) and strictly monotone for \(1< p < 2\)). The p-Laplacian is an extremely important model in many topical applications and played an important role in the development of the theory of monotone operators.
Definition 2.8
Lemma 2.9
 (i)$$\begin{aligned} \phi (x,y)= \phi (x,z)+ \phi (z,y)+2\langle x-z, Jz-Jy\rangle , \quad \forall x,y,z\in E. \end{aligned}$$
 (ii)$$\begin{aligned} \phi (x,y)+\phi (y,x) = 2\langle x-y, Jx-Jy\rangle , \quad \forall x,y\in E. \end{aligned}$$
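In a Hilbert space, J is the identity and the Lyapunov functional \(\phi (x,y)=\Vert x\Vert ^2-2\langle x,Jy\rangle +\Vert y\Vert ^2\) (the standard definition in this setting, assumed here) reduces to \(\Vert x-y\Vert ^2\); both identities of Lemma 2.9 can then be verified numerically:

```python
import numpy as np

def phi(x, y):
    # phi(x, y) = ||x||^2 - 2<x, Jy> + ||y||^2; with J = I this is ||x - y||^2
    return np.linalg.norm(x) ** 2 - 2 * np.dot(x, y) + np.linalg.norm(y) ** 2

rng = np.random.default_rng(2)
x, y, z = (rng.standard_normal(4) for _ in range(3))
# Identity (i): the three-point identity (J = I)
assert abs(phi(x, y) - (phi(x, z) + phi(z, y) + 2 * np.dot(x - z, z - y))) < 1e-9
# Identity (ii): phi(x, y) + phi(y, x) = 2<x - y, Jx - Jy>
assert abs(phi(x, y) + phi(y, x) - 2 * np.dot(x - y, x - y)) < 1e-9
```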
Lemma 2.10
The lemma that follows is stated and proven in [3, Lemma 2.2].
Lemma 2.11
The following lemma was given in [21].
Lemma 2.12
The following property of \(\phi (.,.)\) was given in [1, Theorem 7.5] (see also [16, 17]).
Lemma 2.13
We next recall some existing results from the literature to facilitate our proof of strong convergence. The first is taken from [31].
Lemma 2.14
Lemma 2.15
 (a)
\(\{\alpha _n\}\subset [0,1],\) \( \sum _{n=1}^{\infty } \alpha _n=\infty ;\)
 (b)
\(\limsup \sigma _n \le 0\);
 (c)
\(\gamma _n \ge 0 \ (n \ge 1),\) \( \sum _{n=1}^{\infty } \gamma _n <\infty .\)
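The conditions (a)–(c) above are those of Xu's lemma [52]; in its standard form (assumed here, since the statement is abridged in this excerpt), any nonnegative sequence satisfying \(a_{n+1} \le (1-\alpha _n)a_n + \alpha _n\sigma _n + \gamma _n\) under (a)–(c) converges to 0. A quick numerical illustration with sequences of our choosing:

```python
# Illustrative sequences (ours): alpha_n = 1/(n+1) (sum = infinity),
# sigma_n = 1/(n+1) (limsup <= 0), gamma_n = 1/n^2 (summable).
a = 1.0
for n in range(1, 100_000):
    alpha, sigma, gamma = 1.0 / (n + 1), 1.0 / (n + 1), 1.0 / n ** 2
    a = (1 - alpha) * a + alpha * sigma + gamma
assert a < 1e-2   # a_n -> 0, as the lemma asserts
```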
The following lemma is needed in our proof to show that the weak limit point is a solution to the inclusion problem (1).
Lemma 2.16
([6]). Let \(B:E \rightarrow 2^{E^*} \) be a maximal monotone mapping and \( A: E \rightarrow E^* \) be a Lipschitz continuous and monotone mapping. Then the mapping \(A+B\) is a maximal monotone mapping.
The following result gives an equivalence of fixed point problem and problem (1).
Lemma 2.17
Proof
\(x_n\rightarrow x\) means that \(x_n\) converges to x strongly.
\(x_n\rightharpoonup x\) means that \(x_n\) converges to x weakly.
3 Approximation Method
In this section, we propose our method and state certain conditions under which we obtain the desired convergence for our proposed methods. First, we give the conditions governing the cost function and the sequence of parameters below.
Assumption 3.1
 (a)
Let E be a real 2-uniformly convex Banach space which is also uniformly smooth.
 (b)
Let \(B:E \rightarrow 2^{E^*} \) be a maximal monotone operator and \( A: E \rightarrow E^* \) a monotone and L-Lipschitz continuous operator.
 (c)
The solution set \( (A+B)^{-1}(0) \) of the inclusion problem (1) is nonempty.
Throughout this paper, we assume that the duality mapping J and the resolvent \(J_{\lambda _n}^B:=(J+\lambda _nB)^{-1} J\) of the maximal monotone operator B are easy to compute.
Assumption 3.2
\(\mu \) is the 2-uniform convexity constant of E;
\(\kappa \) is the 2-uniform smoothness constant of \(E^*\);
L is the Lipschitz constant of A.
Assumption 3.2 is satisfied, e.g., for \( \lambda _n = a+\frac{n}{n+1}\Big (\frac{1}{\sqrt{2\mu }\kappa L}-a\Big )\) for all \( n \ge 1\).
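As a quick sanity check of this choice, with Hilbert-space constants \(\mu =1\), \(\kappa =\frac{1}{\sqrt{2}}\) and illustrative values \(L=2\), \(a=0.05\) (our choices), the stepsizes increase from near a toward, but never reach, the bound \(\frac{1}{\sqrt{2\mu }\kappa L}\):

```python
import math

mu, kappa, L, a = 1.0, 1 / math.sqrt(2), 2.0, 0.05   # assumed Hilbert-space constants
bound = 1.0 / (math.sqrt(2 * mu) * kappa * L)        # equals 1/L = 0.5 here
lam = [a + n / (n + 1) * (bound - a) for n in range(1, 1001)]
assert all(a < l < bound for l in lam)               # admissible for every n
assert abs(lam[-1] - bound) < 1e-3                   # lambda_n increases toward the bound
```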
We now give our proposed method below.
Algorithm 3.3
Step 0 Let Assumptions 3.1 and 3.2 hold. Let \( x_1 \in E \) be a given starting point. Set \( n := 1 \).
Step 1 Compute \(y_n:=J_{\lambda _n}^B\circ J^{-1}(Jx_n - \lambda _nAx_n)\). If \(x_n-y_n=0\): STOP.
We observe that in real Hilbert spaces, the duality mapping J becomes the identity mapping and our Algorithm 3.3 reduces to the algorithm proposed by Tseng in [48].
Note that both sequences \(\{y_n\}\) and \(\{x_n\}\) are in E. Furthermore, by Lemma 2.17, we have that if \(x_n=y_n\), then \(x_n\) is a solution of problem (1).
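In the Hilbert-space setting of [48] described above, J is the identity and \(J_{\lambda }^B=(I+\lambda B)^{-1}\), so one iteration is the forward-backward step followed by Tseng's correction. A minimal sketch, on an illustrative problem of our choosing (A the gradient of \(\frac{1}{2}\Vert x-b\Vert ^2\) and B the subdifferential of \(\tau \Vert \cdot \Vert _1\), whose resolvent is soft-thresholding):

```python
import numpy as np

def tseng_fbf(A, resolvent_B, x0, lam, n_iter=500):
    """Tseng-type forward-backward-forward iteration in a Hilbert space (J = I):
    y_n = (I + lam*B)^{-1}(x_n - lam*A(x_n)), then the correction
    x_{n+1} = y_n - lam*(A(y_n) - A(x_n))."""
    x = x0
    for _ in range(n_iter):
        Ax = A(x)
        y = resolvent_B(x - lam * Ax)       # forward-backward step
        if np.allclose(x, y):               # x_n = y_n: x_n is a zero of A + B
            return y
        x = y - lam * (A(y) - Ax)           # Tseng's correction (second forward step)
    return x

# Illustrative data (ours, not from the paper): L = 1, so any lam in (0, 1) works.
b, tau, lam = np.array([2.0, -0.3, 1.0]), 0.5, 0.5
soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)   # resolvent of tau*||.||_1
x = tseng_fbf(lambda x: x - b, lambda v: soft(v, lam * tau), np.zeros(3), lam)
# The zero of A + B is the soft-threshold of b at level tau:
assert np.allclose(x, soft(b, tau), atol=1e-6)
```

Because A here is only assumed monotone and Lipschitz (not cocoercive), the plain forward-backward step alone need not converge; the second forward step is what Tseng's scheme adds.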
When \(A=0\) in Algorithm 3.3, Algorithm 3.3 reduces to the methods proposed in [6, 20, 24, 26, 27, 28, 32, 35, 38, 39, 43]. In this case, the assumption that E is a 2-uniformly convex and uniformly smooth Banach space is not needed; in fact, convergence can then be obtained in reflexive Banach spaces. However, we do not know whether the convergence of Algorithm 3.3 for problem (1) can be obtained in more general reflexive Banach spaces.
 When \(B=N_C\), the normal cone for a closed and convex subset C of E (\(N_C(x):=\{x^* \in E^*:\langle y-x,x^*\rangle \le 0, \forall y \in C \}\)), the inclusion problem (1) reduces to a variational inequality problem (i.e., find \(x \in C: \langle Ax,y-x\rangle \ge 0,~\forall y \in C\)). It is well known that \(N_C=\partial \delta _C\), where \(\delta _C\) is the indicator function of C, defined by \(\delta _C(x)=0\) if \(x \in C\) and \(\delta _C(x)=+\infty \) if \(x \notin C\), and \(\partial (.)\) is the subdifferential, defined by \(\partial f(x):=\{x^* \in E^*: f(y)\ge f(x)+\langle x^*, y-x\rangle ,~~\forall y \in E\}\) for a proper, lower semicontinuous convex functional f on E. By the theorem of Rockafellar in [40, 41], \(N_C=\partial \delta _C\) is maximal monotone. Hence,$$\begin{aligned} Jz \in J (J_{\lambda _n}^Bz)+\lambda _n \partial \delta _C(J_{\lambda _n}^Bz),\quad \forall z \in E. \end{aligned}$$This implies that$$\begin{aligned} 0\in \partial \delta _C(J_{\lambda _n}^Bz)+\frac{1}{\lambda _n}J (J_{\lambda _n}^Bz)-\frac{1}{\lambda _n}Jz =\partial \left( \delta _C+\frac{1}{2\lambda _n}\Vert \cdot \Vert ^2-\frac{1}{\lambda _n}\langle \cdot ,Jz\rangle \right) (J_{\lambda _n}^Bz). \end{aligned}$$Therefore,$$\begin{aligned} J_{\lambda _n}^B(z)=\mathop {\mathrm{argmin}}\limits _{{y \in E}}\left\{ \delta _C(y)+\frac{1}{2\lambda _n}\Vert y\Vert ^2-\frac{1}{\lambda _n}\langle y,Jz\rangle \right\} \end{aligned}$$and \(y_n\) in Algorithm 3.3 reduces to$$\begin{aligned} y_n= \mathop {\mathrm{argmin}}\limits _{{y \in E}}\left\{ \delta _C(y)+\frac{1}{2\lambda _n}\Vert y\Vert ^2-\frac{1}{\lambda _n}\langle y,Jx_n - \lambda _nAx_n\rangle \right\} . \end{aligned}$$
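In a Hilbert space the argmin above is simply the metric projection, so \(y_n=P_C(x_n-\lambda _nAx_n)\) and the method becomes an extragradient-type scheme for the variational inequality. A minimal sketch under that assumption, with an illustrative skew (monotone and Lipschitz but not cocoercive) operator and feasible set of our choosing:

```python
import numpy as np

# Illustrative VI (ours): A(x) = Mx + q with M skew-symmetric
# (monotone, ||M||-Lipschitz, not cocoercive), C = the box [0, 1]^2.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
q = np.array([-0.5, 0.5])
A = lambda x: M @ x + q
proj_C = lambda x: np.clip(x, 0.0, 1.0)   # metric projection onto C

lam, x = 0.5, np.array([0.2, 0.9])        # lam in (0, 1/L), L = ||M|| = 1
for _ in range(2000):
    Ax = A(x)
    y = proj_C(x - lam * Ax)              # y_n = P_C(x_n - lam*A(x_n))
    x = y - lam * (A(y) - Ax)             # Tseng's correction step
# A vanishes at [0.5, 0.5], an interior point of C, so that is the VI solution.
assert np.allclose(x, [0.5, 0.5], atol=1e-4)
```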
3.1 Convergence Analysis
In this section, we give the convergence analysis of the proposed Algorithm 3.3. First, we establish the boundedness of the sequence of iterates generated by Algorithm 3.3.
Lemma 3.4
Let Assumptions 3.1 and 3.2 hold. Assume that \(x^*\in (A+B)^{-1}(0)\) and let the sequence \( \{x_n\}_{n=1}^\infty \) be generated by Algorithm 3.3. Then \(\{x_n\}\) is bounded.
Proof
Definition 3.5
The duality mapping J is weakly sequentially continuous if, for any sequence \(\{x_n\}\subset E\) with \(x_n\rightharpoonup x\) as \(n\rightarrow \infty \), we have \(Jx_n\rightharpoonup ^* Jx\) as \(n\rightarrow \infty \). It is known that the normalized duality mapping on \(\ell _p\) spaces, \(1< p <\infty \), is weakly sequentially continuous.
We now obtain the weak convergence result of Algorithm 3.3 in the next theorem.
Theorem 3.6
Let Assumptions 3.1 and 3.2 hold. Assume that J is weakly sequentially continuous on E and let the sequence \( \{x_n\}_{n=1}^\infty \) be generated by Algorithm 3.3. Then \(\{x_n\}\) converges weakly to \(z\in (A+B)^{-1}(0)\). Moreover, \(z:=\underset{n\rightarrow \infty }{\lim }\Pi _{(A+B)^{-1}(0)}(x_n)\).
Proof
It is easy to see from Algorithm 3.3 above and Lemma 2.17 that \(x_n=y_n\) if and only if \(x_n \in (A+B)^{-1}(0)\). Also, we have already established that \(\Vert x_n-y_n\Vert \rightarrow 0\) holds when \((A+B)^{-1}(0)\ne \emptyset \). Therefore, using \(\Vert x_n-y_n\Vert \) as a measure of the convergence rate, we obtain the following nonasymptotic rate of convergence of our proposed Algorithm 3.3.
Theorem 3.7
Let Assumptions 3.1 and 3.2 hold. Let the sequence \( \{x_n\}_{n=1}^\infty \) be generated by Algorithm 3.3. Then \(\min _{1\le k\le n}\Vert x_k-y_k\Vert =O(1/\sqrt{n})\).
Proof
Next, we propose another iterative method in which the sequence of stepsizes does not depend on the Lipschitz constant of the monotone operator A in problem (1).
Algorithm 3.8
Step 0 Let Assumption 3.1 hold. Given \(\gamma >0, l \in (0,1)\) and \(\theta \in (0,\frac{1}{\sqrt{2\mu }\kappa })\). Let \( x_1 \in E \) be a given starting point. Set \( n := 1 \).
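The line search rule (22) is not reproduced in this excerpt; for illustration only, we assume the standard Tseng-type Armijo rule, which takes \(\lambda _n=\gamma l^{m_n}\) with \(m_n\) the smallest nonnegative integer such that \(\lambda _n\Vert Ay_n-Ax_n\Vert \le \theta \Vert y_n-x_n\Vert \). A Hilbert-space sketch of such a backtracking step:

```python
import numpy as np

def armijo_stepsize(A, resolvent_B, x, gamma=1.0, l=0.5, theta=0.4, max_backtracks=50):
    """Backtracking line search of Tseng type (an assumed form of rule (22)):
    return the largest lam = gamma * l**m satisfying
    lam * ||A(y) - A(x)|| <= theta * ||y - x||,
    where y = (I + lam*B)^{-1}(x - lam*A(x)); Hilbert-space case, J = I."""
    lam = gamma
    for _ in range(max_backtracks):
        y = resolvent_B(x - lam * A(x), lam)
        if lam * np.linalg.norm(A(y) - A(x)) <= theta * np.linalg.norm(y - x):
            return lam, y
        lam *= l
    return lam, y

# Illustrative data (ours): A = grad of 0.5*||x - b||^2 and B = 0, so the
# resolvent is the identity; the criterion then reduces to lam <= theta,
# and backtracking from gamma = 1 by factor l = 0.5 stops at lam = 0.25.
b = np.array([1.0, -2.0])
lam, y = armijo_stepsize(lambda x: x - b, lambda v, lam: v, np.zeros(2))
assert lam == 0.25
```

Note that the returned stepsize never needs the Lipschitz constant L, which is the point of Algorithm 3.8.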
Before we establish the weak convergence analysis of Algorithm 3.8, we first show in the following lemma that the line search rule given in (22) is well-defined.
Lemma 3.9
Proof
We now give a weak convergence result using Algorithm 3.8 in the next theorem.
Theorem 3.10
Let Assumption 3.1 hold. Assume that J is weakly sequentially continuous on E and let the sequence \( \{x_n\}_{n=1}^\infty \) be generated by Algorithm 3.8. Then \(\{x_n\}\) converges weakly to \(z\in (A+B)^{-1}(0)\). Moreover, \(z:=\lim _{n\rightarrow \infty } \Pi _{(A+B)^{-1}(0)}(x_n)\).
Proof
Finally, we give a modification of Algorithm 3.3 and consequently obtain the strong convergence analysis below.
Algorithm 3.11
Step 0 Let Assumptions 3.1 and 3.2 hold. Suppose that \(\{\alpha _n\}\) is a real sequence in (0,1) and let \( x_1 \in E \) be a given starting point. Set \( n := 1 \).
Step 1 Compute \(y_n:= J_{\lambda _n}^BJ^{-1}(Jx_n - \lambda _nAx_n)\). If \(x_n-y_n=0\): STOP.
Theorem 3.12
Let Assumptions 3.1 and 3.2 hold. Suppose that \(\lim _{n \rightarrow \infty } \alpha _n = 0 \) and \( \sum _{n = 1}^{\infty } \alpha _n = \infty \). Let the sequence \( \{x_n\}_{n=1}^\infty \) be generated by Algorithm 3.11. Then \(\{x_n\}\) converges strongly to \(z=\Pi _{(A+B)^{-1}(0)}(x_1)\).
Proof
Remark 3.13
Our proposed Algorithms 3.3 and 3.11 are more broadly applicable than the methods proposed in [10, 12, 23, 29, 30, 42, 44, 45, 46, 49], even in Hilbert spaces. The methods proposed in [12, 23, 29, 30, 42, 44, 45, 46, 49] apply to problem (1) only when B is maximal monotone and A is an inverse-strongly monotone (cocoercive) operator in a real Hilbert space. Our Algorithms 3.3 and 3.11 apply when B is maximal monotone and A is merely monotone, even in 2-uniformly convex and uniformly smooth Banach spaces (e.g., \(L_p\), \(1<p\le 2\)). Our results in this paper also complement the results of [14, 22].
4 Application
Theorem 4.1
Theorem 4.2
Remark 4.3

Our results in Theorems 4.1 and 4.2 complement the results of Bredies [9] and Guan and Song [19]. Consequently, our results in Sect. 3.1 extend the results of [9, 19] to the inclusion problem (1). In particular, we do not assume boundedness of \(\{x_n\}\) (which was imposed in [9, 19]). Therefore, our results improve on those of [9, 19].

The minimization problem (35) in this section extends the problem studied in [8, 15, 34, 50] and other related papers from Hilbert spaces to Banach spaces.
5 Conclusion
The results in this paper exclude \(L_p\) spaces with \(p > 2\). Therefore, extension of the results in this paper to a more general reflexive Banach space will be desired.
How to effectively compute the duality mapping J and the resolvent of maximal monotone mapping B during implementations of our proposed algorithms will be considered further.
The numerical implementation of problem (1) in applications arising from signal processing, image reconstruction, etc., will be studied;
Other ways of choosing the stepsizes \(\lambda _n\) to give faster convergence of the proposed methods in this paper will be given.
Acknowledgements
Open access funding provided by Institute of Science and Technology (IST Austria).
References
1. Alber, Y.I.: Metric and generalized projection operators in Banach spaces: properties and applications. In: Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Lecture Notes in Pure and Appl. Math., vol. 178, pp. 15–50. Dekker, New York (1996)
2. Alber, Y., Ryazantseva, I.: Nonlinear Ill-Posed Problems of Monotone Type. Springer, Dordrecht (2006)
3. Aoyama, K., Kohsaka, F.: Strongly relatively nonexpansive sequences generated by firmly nonexpansive-like mappings. Fixed Point Theory Appl. 2014, 95 (2014)
4. Avetisyan, K., Djordjević, O., Pavlović, M.: Littlewood–Paley inequalities in uniformly convex and uniformly smooth Banach spaces. J. Math. Anal. Appl. 336(1), 31–43 (2007)
5. Ball, K., Carlen, E.A., Lieb, E.H.: Sharp uniform convexity and smoothness inequalities for trace norms. Invent. Math. 115(3), 463–482 (1994)
6. Barbu, V.: Nonlinear Semigroups and Differential Equations in Banach Spaces. Editura Academiei R.S.R., Bucharest (1976)
7. Beauzamy, B.: Introduction to Banach Spaces and Their Geometry, 2nd edn. North-Holland Mathematics Studies, vol. 68. North-Holland, Amsterdam (1985)
8. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
9. Bredies, K.: A forward-backward splitting algorithm for the minimization of nonsmooth convex functionals in Banach space. Inverse Probl. 25(1), 015005 (2009)
10. Briceño-Arias, L.M.: Forward-partial inverse-forward splitting for solving monotone inclusions. J. Optim. Theory Appl. 166(2), 391–413 (2015)
11. Chen, G.H.-G., Rockafellar, R.T.: Convergence rates in forward-backward splitting. SIAM J. Optim. 7(2), 421–444 (1997)
12. Cho, S.Y., Qin, X., Wang, L.: Strong convergence of a splitting algorithm for treating monotone operators. Fixed Point Theory Appl. 2014, 94 (2014)
13. Cioranescu, I.: Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems. Mathematics and Its Applications, vol. 62. Kluwer, Dordrecht (1990)
14. Combettes, P.L., Nguyen, Q.V.: Solving composite monotone inclusions in reflexive Banach spaces by constructing best Bregman approximations from their Kuhn–Tucker set. J. Convex Anal. 23(2), 481–510 (2016)
15. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4(4), 1168–1200 (2005)
16. Diestel, J.: Geometry of Banach Spaces: Selected Topics. Lecture Notes in Mathematics, vol. 485. Springer, Berlin (1975)
17. Figiel, T.: On the moduli of convexity and smoothness. Studia Math. 56(2), 121–155 (1976)
18. Gibali, A., Thong, D.V.: Tseng type methods for solving inclusion problems and its applications. Calcolo 55(4), 49 (2018)
19. Guan, W.-B., Song, W.: The generalized forward-backward splitting method for the minimization of the sum of two functions in Banach spaces. Numer. Funct. Anal. Optim. 36(7), 867–886 (2015)
20. Güler, O.: On the convergence of the proximal point algorithm for convex minimization. SIAM J. Control Optim. 29, 403–419 (1991)
21. Iiduka, H., Takahashi, W.: Weak convergence of a projection algorithm for variational inequalities in a Banach space. J. Math. Anal. Appl. 339(1), 668–679 (2008)
22. Iusem, A.N., Svaiter, B.F.: Splitting methods for finding zeroes of sums of maximal monotone operators in Banach spaces. J. Nonlinear Convex Anal. 15(2), 379–397 (2014)
23. Jiao, H., Wang, F.: On an iterative method for finding a zero to the sum of two maximal monotone operators. J. Appl. Math. 2014, Art. ID 414031 (2014)
24. Kamimura, S., Kohsaka, F., Takahashi, W.: Weak and strong convergence theorems for maximal monotone operators in a Banach space. Set-Valued Anal. 12, 417–429 (2004)
25. Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13(3), 938–945 (2003)
26. Kohsaka, F., Takahashi, W.: Strong convergence of an iterative sequence for maximal monotone operators in a Banach space. Abstr. Appl. Anal. 3, 239–249 (2004)
27. Lions, P.L.: Une méthode itérative de résolution d'une inéquation variationnelle. Israel J. Math. 31, 204–208 (1978)
28. Lions, P.L., Mercier, B.: Splitting algorithms for the sum of two nonlinear operators. SIAM J. Numer. Anal. 16, 964–979 (1979)
29. Lin, L.-J., Takahashi, W.: A general iterative method for hierarchical variational inequality problems in Hilbert spaces and applications. Positivity 16(3), 429–453 (2012)
30. López, G., Martín-Márquez, V., Wang, F., Xu, H.K.: Forward-backward splitting methods for accretive operators in Banach spaces. Abstr. Appl. Anal. 2012, Art. ID 109236 (2012)
31. Maingé, P.E.: Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization. Set-Valued Anal. 16(7–8), 899–912 (2008)
32. Martinet, B.: Régularisation d'inéquations variationnelles par approximations successives. Rev. Française Informat. Recherche Opérationnelle 4, Sér. R-3, 154–158 (1970)
33. Moudafi, A., Thera, M.: Finding a zero of the sum of two maximal monotone operators. J. Optim. Theory Appl. 94, 425–448 (1997)
34. Nguyen, T.P., Pauwels, E., Richard, E., Suter, B.W.: Extragradient method in optimization: convergence and complexity. J. Optim. Theory Appl. 176(1), 137–162 (2018)
35. Passty, G.B.: Ergodic convergence to a zero of the sum of monotone operators in Hilbert spaces. J. Math. Anal. Appl. 72, 383–390 (1979)
36. Peaceman, D.H., Rachford, H.H.: The numerical solutions of parabolic and elliptic differential equations. J. Soc. Ind. Appl. Math. 3, 28–41 (1955)
37. Peypouquet, J.: Convex Optimization in Normed Spaces: Theory, Methods and Examples. Springer Briefs in Optimization. Springer, Cham (2015)
38. Reich, S.: A weak convergence theorem for the alternating method with Bregman distances. In: Kartsatos, A.G. (ed.) Theory and Applications of Nonlinear Operators of Accretive and Monotone Type. Lecture Notes in Pure and Appl. Math., vol. 178, pp. 313–318. Dekker, New York (1996)
39. Rockafellar, R.T.: Monotone operators and the proximal point algorithm. SIAM J. Control Optim. 14, 877–898 (1976)
40. Rockafellar, R.T.: Characterization of the subdifferentials of convex functions. Pac. J. Math. 17, 497–510 (1966)
41. Rockafellar, R.T.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)
42. Shehu, Y., Cai, G.: Strong convergence result of forward-backward splitting methods for accretive operators in Banach spaces with applications. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. RACSAM 112(1), 71–87 (2018)
43. Solodov, M.V., Svaiter, B.F.: Forcing strong convergence of proximal point iterations in a Hilbert space. Math. Program. 87, 189–202 (2000)
44. Takahashi, S., Takahashi, W., Toyoda, M.: Strong convergence theorems for maximal monotone operators with nonlinear mappings in Hilbert spaces. J. Optim. Theory Appl. 147(1), 27–41 (2010)
45. Takahashi, W., Wong, N.C., Yao, J.C.: Two generalized strong convergence theorems of Halpern's type in Hilbert spaces and applications. Taiwanese J. Math. 16(3), 1151–1172 (2012)
46. Takahashi, W.: Strong convergence theorems for maximal and inverse-strongly monotone mappings in Hilbert spaces and applications. J. Optim. Theory Appl. 157(3), 781–802 (2013)
47. Takahashi, W.: Nonlinear Functional Analysis. Yokohama Publishers, Yokohama (2000)
48. Tseng, P.: A modified forward-backward splitting method for maximal monotone mappings. SIAM J. Control Optim. 38(2), 431–446 (2000)
49. Wang, Y., Wang, F.: Strong convergence of the forward-backward splitting method with multiple parameters in Hilbert spaces. Optimization 67(4), 493–505 (2018)
50. Wang, Y., Xu, H.K.: Strong convergence for the proximal-gradient method. J. Nonlinear Convex Anal. 15(3), 581–593 (2014)
51. Xu, H.K.: Inequalities in Banach spaces with applications. Nonlinear Anal. 16(12), 1127–1138 (1991)
52. Xu, H.K.: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 66(1), 240–256 (2002)
Copyright information
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.