
Hybrid projected subgradient-proximal algorithms for solving split equilibrium problems and split common fixed point problems of nonexpansive mappings in Hilbert spaces

Open Access
Research

Abstract

In this paper, we propose two strongly convergent algorithms, combining the diagonal subgradient method, the projection method, and the proximal method, for solving split equilibrium problems and split common fixed point problems of nonexpansive mappings in real Hilbert spaces, namely fixed point set-constrained split equilibrium problems (FPSCSEPs). The computations of the first algorithm require prior knowledge of the operator norm, which is not always easy to estimate. We therefore propose a second iterative algorithm with a step-size selection rule whose implementation does not require any prior information on the operator norm. The strong convergence of both algorithms is established under mild assumptions on the equilibrium bifunctions. We also report some applications and numerical results to compare and illustrate the convergence of the proposed algorithms.

Keywords

nonexpansive mappings; common fixed point problem; equilibrium problem; split equilibrium problem; monotone bifunction; pseudomonotone bifunction; diagonal subgradient method; projected subgradient-proximal algorithm

1 Introduction

In 1994 Censor and Elfving [1] introduced the notion of the split feasibility problem, which is to find an element of a closed convex subset of a Euclidean space whose image under a linear operator is an element of another closed convex subset of a Euclidean space. Then, in 2009, Censor and Segal [2] introduced the split common fixed point problem (SCFPP), of which the split feasibility problem is a special case. Many convex optimization problems in a Hilbert space can be written in the form of an SCFPP, and SCFPPs have played an important role in the study of several unrelated problems arising in physics, finance, economics, network analysis, elasticity, optimization, water resources, medical imaging, structural analysis, image analysis, and several other real-world applications (see, e.g., [3, 4]). Owing to this wide range of applications, SCFPPs have emerged as an interesting and fascinating research area of mathematics.

Let Δ be a nonempty closed convex subset of a real Hilbert space H equipped with the inner product \(\langle\cdot,\cdot\rangle\) and with the corresponding norm \(\|\cdot\|\) and let \(U:\Delta\rightarrow \Delta\) be an operator. We denote by \(\operatorname{Fix}U=\{x\in\Delta:Ux=x\}\) the subset of fixed points of U. We say that U is nonexpansive if \(\|U(x)-U(y)\|\leq\|x-y\|\)\(\forall x,y\in\Delta\).

Throughout the paper, unless otherwise stated, we assume that \(H_{1}\) and \(H_{2}\) are two real Hilbert spaces and \(A: H_{1}\rightarrow {H_{2}}\) is a nonzero bounded linear operator. Let C be a nonempty closed convex subset of \(H_{1}\) and \(T:C\rightarrow C\) a nonexpansive operator, and let D be a nonempty closed convex subset of \(H_{2}\) and \(V:D\rightarrow D\) a nonexpansive operator. We are given two bifunctions \(f:C\times C\rightarrow{\mathbb{R}}\) and \(g:D\times D\rightarrow {\mathbb{R}}\). The notation \(\operatorname{EP}(f,C)\) represents the following equilibrium problem: find \(x^{*}\in C\) such that \(f(x^{*},y)\geq0\) \(\forall y\in C\), and \(\operatorname{SEP}(f,C)\) represents its solution set. Many problems in physics, optimization, and economics can be reduced to finding a solution of the equilibrium problem \(\operatorname{EP}(f,C)\); see, e.g., [5]. In 1997, Combettes and Hirstoaga [6] introduced an iterative scheme for finding a solution of \(\operatorname{EP}(f,C)\) under the assumption that \(\operatorname{SEP}(f,C)\) is nonempty. Later on, many iterative algorithms were proposed to find an element of \(\operatorname{Fix}T \cap \operatorname{SEP}(f,C)\); see [7, 8, 9, 10]. In 2013, Kazmi and Rizvi [11] considered a split equilibrium problem (SEP):
$$\mbox{find } x^{*}\in{H_{1}} \mbox{ such that } \textstyle\begin{cases} x^{*}\in C, \\ f(x^{*},y)\geq{0},\quad \forall{ y\in{C}}, \\ u^{*}=Ax^{*}\in D, \\ g(u^{*},u)\geq{0}, \quad \forall{u\in{D}}. \end{cases} $$
They introduced an iterative scheme which converges strongly to a common solution of the split equilibrium problem, the variational inequality problem, and the fixed point problem for a nonexpansive mapping. Many researchers have also proposed algorithms for finding a solution point of the SEP; see, for example, [12, 13, 14] and the references therein. Hieu [14] proposed an algorithm for solving the SEP which combines three methods: the projection method, the proximal method, and the diagonal subgradient method. Recently, Dinh, Son, and Anh [15] considered the following fixed point set-constrained split equilibrium problem (FPSCSEP):
$$ \text{find } x^{*}\in{C} \text{ such that } \textstyle\begin{cases} x^{*}\in \operatorname{Fix}T, \\ f(x^{*},y)\geq{0},\quad \forall{ y\in{C}}, \\ u^{*}=Ax^{*}\in \operatorname{Fix}V, \\ g(u^{*},u)\geq{0},\quad \forall{u\in{D}}. \end{cases} $$
(1)
Let \(\operatorname{SFPSCSEP}(f,C,T;g,D,V)\), or simply S, denote the solution set of FPSCSEP (1). Problem (1) includes two fixed point set-constrained equilibrium problems (FPSCEPs). Consider the following fixed point set-constrained equilibrium problem (\(\operatorname{FPSCEP}(f, C,T)\)):
$$ \text{find } x^{*}\in{C} \text{ such that } \textstyle\begin{cases} x^{*}\in \operatorname{Fix}T, \\ f(x^{*},y)\geq{0},\quad \forall{ y\in{C}}, \end{cases} $$
(2)
and let \(\operatorname{SFPSCEP}(f,C,T)\), or simply \({S_{1}}\), denote its solution set. Similarly, let \(\operatorname{FPSCEP}(g, D,V)\) denote the fixed point set-constrained equilibrium problem
$$ \text{find } u^{*}\in{D} \text{ such that } \textstyle\begin{cases} u^{*}\in \operatorname{Fix}V, \\ g(u^{*},u)\geq{0},\quad \forall{u\in{D}}, \end{cases} $$
(3)
and let \(\operatorname{SFPSCEP}(g,D,V)\), or simply \(S_{2}\), denote its solution set. Therefore, from (1), (2), and (3) we have \(S=\{x^{*}\in S_{1}:Ax^{*}\in S_{2}\}\). Moreover, \(S_{1}=\{ x^{*}\in C: x^{*}\in \operatorname{SEP}(f,C)\cap \operatorname{Fix}T\}\), and similarly \(S_{2}=\{u^{*}\in D:u^{*}\in \operatorname{SEP}(g,D)\cap \operatorname{Fix}V\}\). In [15], Dinh, Son, and Anh proposed extragradient algorithms for finding a solution of the problem (FPSCSEP). Under certain conditions on the parameters, the proposed iteration sequences are proved to converge weakly and strongly to a solution of (FPSCSEP). Furthermore, Dinh, Son, Jiao, and Kim [16] proposed a linesearch algorithm, which combines the extragradient method with the Armijo linesearch rule, for solving the problem (FPSCSEP) in real Hilbert spaces under the assumptions that the first bifunction is pseudomonotone with respect to its solution set, the second bifunction is monotone, and the fixed point mappings are nonexpansive. To obtain a strong convergence result, they combined the proposed algorithm with a hybrid cutting technique. The main advantages of the two mentioned extragradient methods are that they can handle pseudomonotone bifunctions and that their subproblems can be solved numerically more easily than the subproblems in the proximal method. However, solving the strongly convex optimization subproblems and finding the shrinking projections required in [15, 16] is expensive except in special cases where the feasible set has a simple structure.

In this paper, we propose two strongly convergent algorithms for finding a solution of the problem (FPSCSEP). In the first algorithm, two projections onto the feasible set and a projected subgradient step followed by a proximal step need to be computed at each iteration. In the second algorithm, we propose a modification of the first in which the second projection is still performed onto the feasible set, while the first projection onto C is replaced by a projection onto a tangent plane to C, in order to reduce the number of optimization subproblems to be solved. Moreover, in the second algorithm, an adaptive step-size rule in the second projection allows us to dispense with prior knowledge of the operator norm. Compared with the algorithms in [15, 16], the proposed algorithms have a simple structure, and the metric projection is, in general, simpler than solving strongly convex optimization subproblems on the same feasible set or finding shrinking projections.

The paper is organized as follows. In the next section we collect the properties and lemmas which will be used in the convergence proofs of the proposed algorithms. The algorithms and their convergence analysis are presented in the third section. Finally, in the last section we present applications supported by an example and numerical results.

2 Preliminary

To investigate the convergence of the proposed algorithms, in this section we introduce notation and recall properties and technical lemmas which will be used in the sequel. We write \(x_{n}\rightharpoonup{x}\) to indicate that the sequence \(\{ x_{n}\}\) converges weakly to x as \(n\rightarrow{\infty}\), and \(x_{n}\rightarrow{x}\) means that \(\{x_{n}\}\) converges strongly to x. It is well known that the adjoint operator \(A^{*}\) of a bounded linear operator \(A: H_{1}\rightarrow{H_{2}}\) exists.

Let Δ be a subset of a real Hilbert space H and \(f:\Delta \times\Delta\rightarrow{\mathbb{R}}\) be a bifunction. Then f is said to be
  1. (i)
strongly monotone on Δ with modulus \(M > 0\) (shortly, M-strongly monotone on Δ) iff
    $$f(x,y)+f(y,x)\leq{-M\|y-x\|^{2}},\quad \forall{x,y\in\Delta}; $$
     
  2. (ii)
    monotone on Δ iff
    $$f(x,y)+f(y,x)\leq{0},\quad \forall{x,y\in\Delta}; $$
     
  3. (iii)
    pseudomonotone on Δ with respect to \(x\in \Delta\) iff
    $$f(x,y)\geq{0} \quad \mbox{implies}\quad f(y,x)\leq{0},\quad \forall{y\in \Delta}. $$
     

We say that f is pseudomonotone on Δ with respect to \(\Psi \subset\Delta\) if it is pseudomonotone on Δ with respect to every \(x\in\Psi\). When \(\Psi=\Delta\), f is called pseudomonotone on Δ. Clearly, \((\mathrm{i})\Rightarrow(\mathrm{ii})\Rightarrow(\mathrm{iii})\) for every \(x\in\Delta\).

Definition 2.1

Let Δ be a nonempty closed convex subset of a real Hilbert space H. The metric projection on Δ is a mapping \(P_{\Delta} : H\rightarrow{\Delta}\) defined by
$$P_{\Delta}(x) =\arg\min\bigl\{ \Vert y-x \Vert :y\in{\Delta}\bigr\} . $$
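For sets with simple structure the metric projection is available in closed form. The following is a small NumPy sketch; the ball and box below are illustrative choices, not sets from the paper:

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection of x onto the closed ball B(center, radius)."""
    d = x - center
    dist = np.linalg.norm(d)
    if dist <= radius:
        return x.copy()
    return center + (radius / dist) * d

def project_box(x, lo, hi):
    """Metric projection of x onto the box [lo, hi] (componentwise clipping)."""
    return np.clip(x, lo, hi)

x = np.array([3.0, 4.0])
p = project_ball(x, np.zeros(2), 1.0)   # nearest point of the unit ball: [0.6, 0.8]
q = project_box(x, -1.0, 1.0)           # nearest point of the box: [1.0, 1.0]
```

Property (iv) below, \(\langle x-P_{\Delta}(x), y-P_{\Delta}(x)\rangle\leq0\) for all \(y\in\Delta\), can be spot-checked numerically on such examples.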

Properties

Let Δ be a nonempty closed convex subset of a real Hilbert space H and let \(P_{\Delta}\) be the metric projection onto Δ. Since Δ is nonempty, closed, and convex, \(P_{\Delta}(x)\) exists and is unique. From the definition of \(P_{\Delta}\), it is easy to show that \(P_{\Delta}\) has the following characteristic properties.
  1. (i)
For all \(x\in H\) and \(y\in\Delta\),
    $$\bigl\Vert P_{\Delta}(x)-x \bigr\Vert \leq{ \Vert x-y \Vert }. $$
     
  2. (ii)
For all \(x,y\in H\),
    $$\bigl\Vert P_{\Delta}(x)-P_{\Delta}(y) \bigr\Vert ^{2} \leq\bigl\langle P_{\Delta }(x)-P_{\Delta}(y), x-y\bigr\rangle . $$
     
  3. (iii)
    For all \(x\in{\Delta}\), \(y\in{H}\),
    $$\bigl\Vert x-P_{\Delta}(y) \bigr\Vert ^{2}+ \bigl\Vert P_{\Delta}(y)-y \bigr\Vert ^{2}\leq{ \Vert x-y \Vert ^{2}}. $$
     
  4. (iv)

    \(z=P_{\Delta}(x)\) if and only if \(\langle x-z, y-z\rangle\leq0\), \(\forall y\in{\Delta}\).

     

Definition 2.2

Let Δ be a subset of a Hilbert space H and let \(f:\Delta\times\Delta\rightarrow\mathbb{R}\) be a bifunction such that \(f(x,\cdot)\) is a convex function for each x in Δ. Then for \(\epsilon\geq0\) the ϵ-subdifferential (ϵ-diagonal subdifferential) of f at x, denoted by \(\partial_{\epsilon }f(x,\cdot)(x)\) or \(\partial_{\epsilon}f(x,x)\), is given by
$$\partial_{\epsilon}f(x,\cdot) (x)=\bigl\{ w\in H:f(x,y)-f(x,x)+\epsilon\geq \langle w,y-x\rangle, \forall y\in\Delta\bigr\} . $$
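When the second argument of the bifunction enters affinely, a diagonal subgradient is immediate. A minimal sketch under an assumed example bifunction \(f(x,y)=\langle Px, y-x\rangle\) (the matrix P is our illustrative choice), for which \(w=Px\) satisfies the defining inequality with \(\epsilon=0\):

```python
import numpy as np

# Illustrative bifunction f(x, y) = <P x, y - x>; since f(x, .) is affine in y,
# w = P x satisfies f(x, y) - f(x, x) + eps >= <w, y - x> already for eps = 0.
P = np.array([[2.0, 1.0], [1.0, 3.0]])

def f(x, y):
    return (P @ x) @ (y - x)

x = np.array([1.0, -2.0])
w = P @ x                      # candidate diagonal subgradient at x

rng = np.random.default_rng(0)
holds = all(f(x, y) - f(x, x) >= w @ (y - x) - 1e-12
            for y in rng.normal(size=(100, 2)))
```

Here `holds` confirms the subgradient inequality on random sample points; for this affine example it in fact holds with equality.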

Lemma 2.1

Given \(\lambda\in[0,1]\) and \(x,y\in H\), where H is a Hilbert space. Then
$$\bigl\Vert \lambda x+(1-\lambda)y \bigr\Vert ^{2}=\lambda \Vert x \Vert ^{2}+(1-\lambda) \Vert y \Vert ^{2}-\lambda(1- \lambda) \Vert x-y \Vert ^{2}. $$
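The identity can be verified numerically on sample vectors; a quick sketch (the vectors and λ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
x, y = rng.normal(size=3), rng.normal(size=3)
lam = 0.3

# Both sides of the convex-combination norm identity of Lemma 2.1.
lhs = np.linalg.norm(lam * x + (1 - lam) * y) ** 2
rhs = (lam * np.linalg.norm(x) ** 2 + (1 - lam) * np.linalg.norm(y) ** 2
       - lam * (1 - lam) * np.linalg.norm(x - y) ** 2)
# lhs and rhs agree up to rounding error
```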

Lemma 2.2

(Opial’s condition)

For any sequence \(\{x^{k}\}\) in a Hilbert space H with \(x^{k}\rightharpoonup{x}\), the inequality
$$\liminf_{k\rightarrow+\infty} \bigl\Vert x^{k}-x \bigr\Vert < \liminf_{k\rightarrow +\infty} \bigl\Vert x^{k}-y \bigr\Vert $$
holds for each \(y\in H\) with \(y\neq x\).

The next lemma will be a useful tool to obtain the boundedness of the sequences generated by the algorithms and also to obtain the convergence of the whole sequence to the solution.

Lemma 2.3

If \(\{a_{k}\}_{k=0}^{\infty}\) and \(\{b_{k}\}_{k=0}^{\infty}\) are two nonnegative real sequences such that
$$a_{k+1}\leq a_{k}+b_{k},\quad \forall k\geq0 $$
with \(\sum_{k=0}^{\infty}b_{k} < \infty\), then the sequence \(\{a_{k}\}_{k=0}^{\infty}\) converges.
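As a quick numerical illustration (the choice \(b_{k}=2^{-k}\) and the starting value are ours), the extreme case \(a_{k+1}=a_{k}+b_{k}\) with summable \(b_{k}\) settles to the finite limit \(a_{0}+\sum_{k}b_{k}\):

```python
# a_{k+1} <= a_k + b_k with summable b_k = 2**(-k); taking equality,
# the limit is a_0 + sum_k 2**(-k) = a_0 + 2.
a = 5.0
for k in range(80):
    a = a + 2.0 ** (-k)
# a has numerically reached its limit 7.0
```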

Lemma 2.4

Let Δ be a closed convex subset of a Hilbert space H. If \(U:\Delta\rightarrow\Delta\) is nonexpansive, then \(\operatorname{Fix}U\) is closed and convex.

Now, we assume that the bifunctions \(g:D\times D\rightarrow{\mathbb {R}}\) and \(f:C\times C\rightarrow{\mathbb{R}}\) satisfy the following assumptions, Condition A and Condition B, respectively.

Condition A

  1. (A1)

    \(g(u,u) = 0\), for all \(u\in D\).

     
  2. (A2)

    g is monotone on D, i.e., \(g(u, v)+g(v,u)\leq0\), for all \(u,v\in D\).

     
  3. (A3)
    For each \(u,v,w\in D\),
    $$\limsup_{\alpha\downarrow{0}}g\bigl(\alpha w+(1-\alpha)u,v\bigr)\leq g(u,v). $$
     
  4. (A4)

    \(g(u,\cdot)\) is convex and lower semicontinuous on D for each \(u\in D\).

     

Condition B

  1. (B1)

    \(f(x,x)=0\) for all \(x\in C\).

     
  2. (B2)

    f is pseudomonotone on C with respect to \(x\in \operatorname{SEP}(f,C)\), i.e., if \(x\in \operatorname{SEP}(f,C)\) then \(f(x,y)\geq0\) implies \(f(y,x)\leq0\), \(\forall y\in C\).

     
  3. (B3)
    f satisfies the following condition, called the strict paramonotonicity property:
    $$x\in \operatorname{SEP}(f,C), y\in C, \quad f(y,x)=0\Rightarrow y\in \operatorname{SEP}(f,C). $$
     
  4. (B4)

    f is jointly weakly upper semicontinuous on \(C\times C\) in the sense that, if \(x,y\in C\) and \(\{x^{k}\}, \{y^{k}\}\subset C\) converge weakly to x and y, respectively, then \(f(x^{k},y^{k})\rightarrow f(x,y)\) as \(k\rightarrow\infty\).

     
  5. (B5)

    \(f(x,\cdot)\) is convex, lower semicontinuous and subdifferentiable on C, for all \(x\in C\).

     
  6. (B6)

If \(\{x^{k}\}\) is a bounded sequence in C and \(\epsilon _{k}\rightarrow0\), then the sequence \(\{w^{k}\}\) with \(w^{k}\in \partial_{\epsilon_{k}}f(x^{k},\cdot)(x^{k})\) is bounded.

     

The following three results are from equilibrium programming in Hilbert spaces.

Lemma 2.5

([17, Lemma 2.12])

Let g satisfy Condition A. Then, for each \(r>0\) and \(u\in H_{2}\), there exists \(w\in D\) such that
$$g(w,v)+\frac{1}{r}\langle v-w,w-u\rangle\geq{0}, \quad \forall{v\in D}. $$

Lemma 2.6

([17, Lemma 2.12])

Let g satisfy Condition A. Then, for each \(r>0\) and \(u\in H_{2}\), define a mapping \(T^{g}_{r}\) (called the resolvent of g) by
$$T^{g}_{r}(u)=\biggl\{ w\in D: g(w,v)+\frac{1}{r} \langle v-w,w-u\rangle\geq {0}, \forall{v\in D}\biggr\} . $$
Then the following hold:
  1. (i)

\(T_{r}^{g}\) is single-valued;

     
  2. (ii)
\(T_{r}^{g}\) is firmly nonexpansive, i.e., for all \(u,v\in H_{2}\),
    $$\bigl\Vert T_{r}^{g}(u)-T_{r}^{g}(v) \bigr\Vert ^{2}\leq\bigl\langle T_{r}^{g}(u)-T_{r}^{g}(v), u-v\bigr\rangle ; $$
     
  3. (iii)

\(\operatorname{Fix}(T_{r}^{g})=\operatorname{SEP}(g,D)\), where \(\operatorname{Fix}(T_{r}^{g})\) is the fixed point set of \(T_{r}^{g}\);

     
  4. (iv)

\(\operatorname{SEP}(g,D)\) is closed and convex.

     

Lemma 2.7

([17, Lemma 2.12])

Let \(r,s>0\) and \(u,v\in H_{2}\). Under the assumptions of Lemma 2.6,
$$\bigl\Vert T_{r}^{g}(u)-T_{s}^{g}(v) \bigr\Vert \leq \Vert u-v \Vert +\frac{|s-r|}{s} \bigl\Vert T_{s}^{g}(v)-v \bigr\Vert . $$
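When D is the whole space and g is an affine monotone bifunction, the resolvent of Lemma 2.6 has a closed form. A sketch under these assumptions (the positive semidefinite matrix M below is an arbitrary example): for \(g(u,v)=\langle Mu, v-u\rangle\) on \(D=H_{2}\), the defining inequality forces \(Mw+(w-u)/r=0\), i.e. \(T^{g}_{r}(u)=(I+rM)^{-1}u\).

```python
import numpy as np

# Resolvent of g(u, v) = <M u, v - u> on the whole space: (I + r M)^{-1} u.
M = np.array([[2.0, 0.0], [0.0, 1.0]])   # positive semidefinite example
r = 0.5

def resolvent(u):
    return np.linalg.solve(np.eye(2) + r * M, u)

u = np.array([4.0, 3.0])
w = resolvent(u)                          # equals [2.0, 2.0] for this data

# Firm nonexpansiveness (Lemma 2.6(ii)) can be spot-checked numerically:
v = np.array([-1.0, 2.0])
lhs = np.linalg.norm(resolvent(u) - resolvent(v)) ** 2
rhs = (resolvent(u) - resolvent(v)) @ (u - v)
```

For this example \(\operatorname{Fix}(T^{g}_{r})\) is the null space of M, which agrees with \(\operatorname{SEP}(g,D)\) as Lemma 2.6(iii) predicts.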

3 Main result

In this section, we propose two strongly convergent algorithms for solving FPSCSEP (1) which combine three methods: the projection method, the proximal method, and the diagonal subgradient method.

3.1 Projected subgradient-proximal algorithm

Algorithm 3.1

Initialization: Choose \(x^{0}\in C\). Take \(\{\rho _{k}\}\), \(\{\beta_{k}\}\), \(\{\epsilon_{k}\}\), \(\{r_{k}\}\), \(\{\delta _{k}\}\) and \(\{\mu_{k}\}\) such that
$$\begin{aligned}& \rho_{k}\geq\rho>0,\qquad \beta_{k}\geq0,\qquad \epsilon_{k}\geq0,\qquad r_{k}\geq r > 0,\qquad 0 < a < \delta_{k} < b < 1, \\& 0 < c\leq\mu_{k}\leq d < \frac{1}{\|A\|^{2}}, \\& \sum_{k=0}^{\infty}\frac{\beta_{k}}{\rho_{k}}=+\infty, \qquad \sum_{k=0}^{\infty}\frac{\beta_{k}\epsilon_{k}}{\rho _{k}} < + \infty, \qquad \sum_{k=0}^{\infty}\beta _{k}^{2} < +\infty. \end{aligned}$$
Step 1: Take \(w^{k}\in H_{1}\) such that \(w^{k}\in \partial_{\epsilon_{k}}f(x^{k},\cdot)(x^{k})\).
Step 2: Calculate
$$\alpha_{k}=\frac{\beta_{k}}{\eta_{k}}, \qquad \eta_{ k}=\max \bigl\{ \rho_{k}, \bigl\Vert w^{k} \bigr\Vert \bigr\} $$
and
$$y^{k}=P_{C}\bigl(x^{k}-\alpha_{k}w^{k} \bigr). $$
Step 3: Evaluate
$$t^{k}=\delta_{k}x^{k}+(1-\delta_{k})T \bigl(y^{k}\bigr). $$
Step 4: Evaluate
$$u^{k}=T_{r_{k}}^{g}\bigl(At^{k}\bigr). $$
Step 5: Evaluate
$$x^{k+1}=P_{C}\bigl(t^{k}+\mu_{k}A^{*} \bigl(V\bigl(u^{k}\bigr)-At^{k}\bigr)\bigr). $$
Step 6: Set \(k:=k+1\) and go to Step 1.
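The steps above can be traced on a toy instance. Everything below is an illustrative choice, not data from the paper: \(H_{1}=H_{2}=\mathbb{R}^{2}\), \(C=[-1,1]^{2}\), \(f(x,y)=\langle x,y-x\rangle\), \(g(u,v)=\langle u,v-u\rangle\) on \(D=\mathbb{R}^{2}\), \(T=V=I\), and \(A=I\); for this data the unique solution is \(x^{*}=0\).

```python
import numpy as np

# Toy run of Algorithm 3.1 (all problem data are illustrative assumptions).
A = np.eye(2)

def P_C(x):                      # projection onto the box C = [-1, 1]^2
    return np.clip(x, -1.0, 1.0)

def subgrad_f(x):                # diagonal subgradient of f(x, .) = <x, . - x> at x
    return x

def resolvent_g(u, r):           # T_r^g(u) = u / (1 + r) for g(u, v) = <u, v - u>
    return u / (1.0 + r)

T = V = lambda z: z              # identity mappings are nonexpansive

x = np.array([0.9, -0.7])        # x^0 in C
for k in range(200):
    rho_k, beta_k, eps_k = 1.0, 1.0 / (k + 1), 0.0
    r_k, delta_k, mu_k = 1.0, 0.5, 0.5          # mu_k < 1 / ||A||^2 = 1
    w = subgrad_f(x)                             # Step 1
    alpha_k = beta_k / max(rho_k, np.linalg.norm(w))
    y = P_C(x - alpha_k * w)                     # Step 2
    t = delta_k * x + (1 - delta_k) * T(y)       # Step 3
    u = resolvent_g(A @ t, r_k)                  # Step 4
    x = P_C(t + mu_k * A.T @ (V(u) - A @ t))     # Step 5

# the iterates are driven toward the solution x* = 0
```

On this instance each iteration contracts the norm of the iterate by a constant factor, so the sequence converges to the solution rapidly; the parameter choices satisfy the initialization conditions of the algorithm.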

Remark 3.1

Since \(f(x,\cdot)\) is a lower semicontinuous convex function and \(C\subset \operatorname{dom}f(x,\cdot)\) for every \(x\in C\), the \(\epsilon_{k}\)-diagonal subdifferential \(\partial _{\epsilon_{k}}f(x^{k},\cdot)(x^{k}) \neq\emptyset\) for every \(\epsilon_{k} > 0\). Moreover, \(\rho _{k}\geq\rho > 0\). Therefore, each step of the algorithm is well defined, implying that Algorithm 3.1 is well defined.

Remark 3.2

Since f is pseudomonotone on C with respect to \(\operatorname{SEP}(f,C)\), under Condition B ((B1) and (B4)) the set \(\operatorname{SEP}(f,C)\) is closed and convex.

Therefore, by Lemma 2.4, Remark 3.2, and the linearity of the operator A, the solution set S of the FPSCSEP is closed and convex. In this paper, the solution set S is assumed to be nonempty.

Lemma 3.1

Let \(\{y^{k}\}\), \(\{t^{k}\}\), and \(\{x^{k}\}\) be the sequences generated by Algorithm 3.1. For \(x^{*}\in S\),
$$\bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}\leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+2 \alpha_{k}(1-\delta _{k})f\bigl(x^{k},x^{*} \bigr)-L_{k}+\xi_{k}, $$
where
$$L_{k}=(1-\delta_{k}) \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2}+\delta_{k}(1-\delta_{k}) \bigl\Vert T\bigl(y^{k}\bigr)-x^{k} \bigr\Vert ^{2} $$
and
$$\xi_{k}=2(1-\delta_{k})\frac{\beta_{k}\epsilon_{k}}{\rho _{k}}+2(1- \delta_{k})\beta_{k}^{2}. $$

Proof

Let \(x^{*}\in S\). From \(y^{k}=P_{C}(x^{k}-\alpha_{k}w^{k})\), \(x^{*}\in C\), and the characterization of the metric projection (property (iv)) we have
$$\bigl\langle x^{k}-\alpha_{k}w^{k}-y^{k},y^{k}-x^{*} \bigr\rangle \geq0, $$
implying that
$$\begin{aligned} \bigl\langle x^{*}-y^{k},x^{k}-y^{k} \bigr\rangle &\leq\alpha_{k}\bigl\langle w^{k},x^{*}-y^{k} \bigr\rangle \\ &=\alpha_{k}\bigl\langle w^{k},x^{*}-x^{k} \bigr\rangle + \alpha_{k}\bigl\langle w^{k},x^{k}-y^{k} \bigr\rangle \\ &\leq\alpha_{k}\bigl\langle w^{k},x^{*}-x^{k} \bigr\rangle + \alpha_{k} \bigl\Vert w^{k} \bigr\Vert \bigl\Vert x^{k}-y^{k} \bigr\Vert . \end{aligned}$$
(4)
But also \(x^{k}\in C\). Thus,
$$\bigl\langle x^{k}-\alpha_{k}w^{k}-y^{k},y^{k}-x^{k} \bigr\rangle \geq0, $$
and this together with (4) gives us
$$\bigl\langle x^{k}-y^{k},x^{k}-y^{k} \bigr\rangle = \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2}\leq \alpha_{k}\bigl\langle w^{k},x^{k}-y^{k} \bigr\rangle \leq\alpha_{k} \bigl\Vert w^{k} \bigr\Vert \bigl\Vert x^{k}-y^{k} \bigr\Vert . $$
That is,
$$\bigl\Vert x^{k}-y^{k} \bigr\Vert \leq \alpha_{k} \bigl\Vert w^{k} \bigr\Vert . $$
Thus,
$$\begin{aligned} \alpha_{k} \bigl\Vert w^{k} \bigr\Vert \bigl\Vert x^{k}-y^{k} \bigr\Vert &\leq\bigl(\alpha_{k} \bigl\Vert w^{k} \bigr\Vert \bigr)^{2}=\biggl( \frac{\beta_{k} \Vert w^{k} \Vert }{\eta_{k}}\biggr)^{2} \\ &=\beta _{k}^{2}\biggl(\frac{ \Vert w^{k} \Vert }{\max\{\rho_{k}, \Vert w^{k} \Vert \}} \biggr)^{2}\leq\beta _{k}^{2}. \end{aligned}$$
(5)
Since \(x^{k}\in C\) and \(w^{k}\in\partial_{\epsilon _{k}}f(x^{k},\cdot)(x^{k})\) we have
$$\begin{aligned} f\bigl(x^{k},x^{*}\bigr)+\epsilon_{k}&=f \bigl(x^{k},x^{*}\bigr)-f\bigl(x^{k},x^{k} \bigr)+\epsilon _{k} \\ &\geq\bigl\langle w^{k},x^{*}-x^{k}\bigr\rangle . \end{aligned}$$
(6)
Using the definitions of \(\alpha_{k}\) and \(\eta_{k}\) we obtain
$$ \alpha_{k}=\frac{\beta_{k}}{\eta_{k}}=\frac{\beta_{k}}{\max\{ \rho_{k},\|w^{k}\|\}}\leq \frac{\beta_{k}}{\rho_{k}}. $$
(7)
From (4)-(7) we have
$$ \bigl\langle x^{*}-y^{k},x^{k}-y^{k} \bigr\rangle \leq\alpha _{k}f\bigl(x^{k},x^{*} \bigr)+\frac{\beta_{k}\epsilon_{k}}{\rho_{k}}+\beta_{k}^{2}. $$
(8)
But
$$ 2\bigl\langle x^{*}-y^{k},x^{k}-y^{k} \bigr\rangle = \bigl\Vert y^{k}-x^{*} \bigr\Vert ^{2}+ \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2}- \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}. $$
(9)
From (8) and (9) we have
$$ \bigl\Vert y^{k}-x^{*} \bigr\Vert ^{2}\leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2}+2\alpha _{k}f\bigl(x^{k},x^{*} \bigr)+2\frac{\beta_{k}\epsilon_{k}}{\rho_{k}}+2\beta _{k}^{2}. $$
(10)
Then by definition of \(t^{k}\) we have
$$\begin{aligned} \bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2} &= \bigl\Vert \delta_{k} x^{k}+(1-\delta_{k}) T \bigl(y^{k}\bigr)-x^{*} \bigr\Vert ^{2} \\ &= \bigl\Vert \delta_{k}\bigl(x^{k}-x^{*}\bigr)+ (1-\delta_{k}) \bigl(T\bigl(y^{k}\bigr)-x^{*}\bigr) \bigr\Vert ^{2} \\ &=\delta_{k} \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+(1-\delta_{k}) \bigl\Vert T\bigl(y^{k} \bigr)-x^{*} \bigr\Vert ^{2}-\delta_{k}(1- \delta_{k}) \bigl\Vert T\bigl(y^{k}\bigr)-x^{k} \bigr\Vert ^{2} \\ &=\delta _{k} \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+(1-\delta_{k}) \bigl\Vert T \bigl(y^{k}\bigr)-T\bigl(x^{*}\bigr) \bigr\Vert ^{2}-\delta_{k}(1-\delta_{k}) \bigl\Vert T \bigl(y^{k}\bigr)-x^{k} \bigr\Vert ^{2} \\ &\leq\delta_{k} \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+(1-\delta_{k}) \bigl\Vert y^{k}-x^{*} \bigr\Vert ^{2}-\delta_{k}(1-\delta_{k}) \bigl\Vert T\bigl(y^{k}\bigr)-x^{k} \bigr\Vert ^{2}, \end{aligned}$$
which together with (10) gives
$$\begin{aligned} \bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2} \leq& \delta_{k} \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+(1-\delta_{k}) \biggl( \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2} \\ &{}+2\alpha_{k}f\bigl(x^{k},x^{*}\bigr)+2 \frac{\beta _{k}\epsilon_{k}}{\rho_{k}}+2\beta_{k}^{2}\biggr)-\delta_{k}(1- \delta _{k}) \bigl\Vert T\bigl(y^{k}\bigr)-x^{k} \bigr\Vert ^{2}. \end{aligned}$$
That is,
$$\bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}\leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+2 \alpha_{k}(1-\delta _{k})f\bigl(x^{k},x^{*} \bigr)-L_{k}+\xi_{k}, $$
where
$$L_{k}=(1-\delta_{k}) \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2}+\delta_{k}(1-\delta_{k}) \bigl\Vert T\bigl(y^{k}\bigr)-x^{k} \bigr\Vert ^{2} $$
and
$$\xi_{k}=2(1-\delta_{k})\frac{\beta_{k}\epsilon_{k}}{\rho _{k}}+2(1- \delta_{k})\beta_{k}^{2}. $$
 □

Remark 3.3

Since \(x^{*}\in \operatorname{SEP}(f,C)\) we have \(f(x^{*},x)\geq0\) for all \(x\in C\), and by the pseudomonotonicity of f with respect to \(\operatorname{SEP}(f,C)\) we have \(f(x,x^{*})\leq0\) for all \(x\in C\). Thus, since the sequence \(\{x^{k}\}\) lies in C, we have \(f(x^{k},x^{*})\leq0\). Hence we also have
$$ \bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}\leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}-L_{k}+\xi_{k}. $$
(11)

Lemma 3.2

Let \(\{y^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be the sequences generated by Algorithm 3.1, and let \(x^{*}\in S\). Then
$$\bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2} \leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+2(1- \delta_{k})\alpha _{k}f\bigl(x^{k},x^{*} \bigr)+\xi_{k}-K_{k}, $$
where
$$\begin{aligned} \begin{aligned} K_{k}={}&\mu_{k}\bigl(1-\mu_{k} \Vert A \Vert ^{2}\bigr) \bigl\Vert V\bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2}+\mu _{k} \bigl\Vert u^{k}-At^{k} \bigr\Vert ^{2}+(1-\delta_{k}) \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2} \\ &{}+\delta _{k}(1-\delta_{k}) \bigl\Vert T \bigl(y^{k}\bigr)-x^{k} \bigr\Vert ^{2} \end{aligned} \end{aligned}$$
and
$$\xi_{k}=2(1-\delta_{k})\frac{\beta_{k}\epsilon_{k}}{\rho _{k}}+2(1- \delta_{k})\beta_{k}^{2}. $$

Proof

Let \(x^{*}\in S\). By Lemma 2.6, we have
$$\begin{aligned} \bigl\Vert T^{g}_{r_{k}}At^{k}-Ax^{*} \bigr\Vert ^{2} &= \bigl\Vert T^{g}_{r_{k}}At^{k}-T^{g}_{r_{k}}Ax^{*} \bigr\Vert ^{2} \\ &\leq\bigl\langle T^{g}_{r_{k}}At^{k}-T^{g}_{r_{k}}Ax^{*},At^{k}-Ax^{*} \bigr\rangle \\ &=\bigl\langle T^{g}_{r_{k}}At^{k}-Ax^{*},At^{k}-Ax^{*} \bigr\rangle \\ &=\frac{1}{2} \bigl( \bigl\Vert T^{g}_{r_{k}}At^{k}-Ax^{*} \bigr\Vert ^{2}+ \bigl\Vert At^{k}-Ax^{*} \bigr\Vert ^{2}- \bigl\Vert T^{g}_{r_{k}}At^{k}-At^{k} \bigr\Vert ^{2} \bigr). \end{aligned}$$
That is,
$$ \bigl\Vert T^{g}_{r_{k}}At^{k}-Ax^{*} \bigr\Vert ^{2}\leq\frac{1}{2} \bigl( \bigl\Vert T^{g}_{r_{k}}At^{k}-Ax^{*} \bigr\Vert ^{2}+ \bigl\Vert At^{k}-Ax^{*} \bigr\Vert ^{2}- \bigl\Vert T^{g}_{r_{k}}At^{k}-At^{k} \bigr\Vert ^{2} \bigr). $$
(12)
In view of (12), we have
$$\bigl\Vert T^{g}_{r_{k}}At^{k}-Ax^{*} \bigr\Vert ^{2}\leq \bigl\Vert At^{k}-Ax^{*} \bigr\Vert ^{2}- \bigl\Vert T^{g}_{r_{k}}At^{k}-At^{k} \bigr\Vert ^{2}. $$
Thus,
$$\begin{aligned} \bigl\Vert V\bigl(u^{k}\bigr)-Ax^{*} \bigr\Vert ^{2}&= \bigl\Vert V\bigl(u^{k}\bigr)-V\bigl(Ax^{*}\bigr) \bigr\Vert ^{2} \\ &\leq \bigl\Vert u^{k}-Ax^{*} \bigr\Vert ^{2} = \bigl\Vert T^{g}_{r_{k}}At^{k}-Ax^{*} \bigr\Vert ^{2} \\ &\leq \bigl\Vert At^{k}-Ax^{*} \bigr\Vert ^{2}- \bigl\Vert T^{g}_{r_{k}}At^{k}-At^{k} \bigr\Vert ^{2}, \end{aligned}$$
(13)
where we used \(Ax^{*}\in \operatorname{Fix}V\) and the nonexpansiveness of V. Moreover,
$$\begin{aligned} &\bigl\langle A\bigl(t^{k}-x^{*}\bigr),V \bigl(u^{k}\bigr)-At^{k}\bigr\rangle \\ &\quad =\bigl\langle A\bigl(t^{k}-x^{*}\bigr)+V \bigl(u^{k}\bigr)-At^{k}-V\bigl(u^{k} \bigr)+At^{k} ,V\bigl(u^{k}\bigr)-At^{k}\bigr\rangle \\ &\quad =\bigl\langle V\bigl(u^{k}\bigr)-Ax^{*} ,V \bigl(u^{k}\bigr)-At^{k}\bigr\rangle - \bigl\Vert V \bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2} \\ &\quad =\frac{1}{2} \bigl( \bigl\Vert V\bigl(u^{k} \bigr)-Ax^{*} \bigr\Vert ^{2}+ \bigl\Vert V \bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2}- \bigl\Vert At^{k}-Ax^{*} \bigr\Vert ^{2} \bigr)- \bigl\Vert V\bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2} \\ &\quad =\frac{1}{2} \bigl( \bigl\Vert V\bigl(u^{k} \bigr)-Ax^{*} \bigr\Vert ^{2}- \bigl\Vert V \bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2}- \bigl\Vert At^{k}-Ax^{*} \bigr\Vert ^{2} \bigr). \end{aligned}$$
Hence,
$$\begin{aligned} &\bigl\langle A\bigl(t^{k}-x^{*}\bigr),V \bigl(u^{k}\bigr)-At^{k}\bigr\rangle \\ &\quad =\frac{1}{2}\bigl( \bigl\Vert V\bigl(u^{k} \bigr)-Ax^{*} \bigr\Vert ^{2}- \bigl\Vert V \bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2}- \bigl\Vert At^{k}-Ax^{*} \bigr\Vert ^{2} \bigr). \end{aligned}$$
(14)
From (13) and (14) we have
$$ \bigl\langle A\bigl(t^{k}-x^{*}\bigr),V \bigl(u^{k}\bigr)-At^{k}\bigr\rangle \leq-\frac{1}{2} \bigl( \bigl\Vert T^{g}_{r_{k}}At^{k}-At^{k} \bigr\Vert ^{2}+ \bigl\Vert V\bigl(u^{k} \bigr)-At^{k} \bigr\Vert ^{2}\bigr). $$
(15)
Then, using the nonexpansiveness of \(P_{C}\) and (15), we have
$$\begin{aligned} \bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2} &= \bigl\Vert P_{C}\bigl(t^{k}+ \mu_{k}A^{*}\bigl(V\bigl(u^{k}\bigr)-At^{k} \bigr)\bigr)-P_{C}\bigl(x^{*}\bigr) \bigr\Vert ^{2} \\ &\leq \bigl\Vert \bigl(t^{k}-x^{*}\bigr)+ \mu_{k}A^{*}\bigl(V\bigl(u^{k}\bigr)-At^{k}\bigr) \bigr\Vert ^{2} \\ &= \bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}+ \bigl\Vert \mu_{k} A^{*}\bigl(V\bigl(u^{k}\bigr)-At^{k}\bigr) \bigr\Vert ^{2}+2\mu _{k}\bigl\langle t^{k}-x^{*},A^{*}\bigl(V\bigl(u^{k}\bigr)-At^{k}\bigr) \bigr\rangle \\ &\leq \bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}+\mu_{k}^{2} \Vert A \Vert ^{2} \bigl\Vert V\bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2}+2\mu_{k}\bigl\langle A\bigl(t^{k}-x^{*}\bigr),V\bigl(u^{k}\bigr)-At^{k}\bigr\rangle \\ &\leq \bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}+\mu_{k}^{2} \Vert A \Vert ^{2} \bigl\Vert V\bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2}-\mu_{k}\bigl( \bigl\Vert T^{g}_{r_{k}}At^{k}-At^{k} \bigr\Vert ^{2}+ \bigl\Vert V\bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2}\bigr) \\ &= \bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}-\mu_{k}\bigl(1-\mu_{k} \Vert A \Vert ^{2}\bigr) \bigl\Vert V\bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2}-\mu_{k} \bigl\Vert u^{k}-At^{k} \bigr\Vert ^{2}, \end{aligned}$$
where we used \(\|A^{*}\|=\|A\|\) and, in the last equality, \(u^{k}=T^{g}_{r_{k}}(At^{k})\).
That is,
$$ \bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2} \leq \bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}-\mu_{k}\bigl(1-\mu_{k} \Vert A \Vert ^{2}\bigr) \bigl\Vert V\bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2}-\mu_{k} \bigl\Vert u^{k}-At^{k} \bigr\Vert ^{2}. $$
(16)
Therefore, from Lemma 3.1 and from (16) we have
$$\begin{aligned} \bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2} \leq& \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+2 \alpha_{k}(1-\delta _{k})f\bigl(x^{k},x^{*} \bigr)-L_{k}+\xi_{k} \\ &{}-\mu_{k}\bigl(1-\mu_{k} \Vert A \Vert ^{2} \bigr) \bigl\Vert V\bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2} -\mu_{k} \bigl\Vert u^{k}-At^{k} \bigr\Vert ^{2}. \end{aligned}$$
(17)
That is,
$$ \bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2} \leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+2(1-\delta_{k})\alpha _{k}f \bigl(x^{k},x^{*}\bigr)+\xi_{k}-K_{k}, $$
(18)
where
$$\begin{aligned} K_{k} =&\mu_{k}\bigl(1-\mu_{k} \Vert A \Vert ^{2}\bigr) \bigl\Vert V\bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2}+\mu _{k} \bigl\Vert u^{k}-At^{k} \bigr\Vert ^{2}+(1-\delta_{k}) \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2} \\ &{}+\delta _{k}(1-\delta_{k}) \bigl\Vert T \bigl(y^{k}\bigr)-x^{k} \bigr\Vert ^{2} \end{aligned}$$
and
$$\xi_{k}=2(1-\delta_{k})\frac{\beta_{k}\epsilon_{k}}{\rho _{k}}+2(1- \delta_{k})\beta_{k}^{2}. $$
 □

Lemma 3.3

Let \(\{y^{k}\}\), \(\{t^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be the sequences generated by Algorithm 3.1. Then:
  1. (i)

For \(x^{*}\in S\), the limit of the sequence \(\{\|x^{k}-x^{*}\|^{2}\}\) exists (and \(\{x^{k}\}\) is bounded).

     
  2. (ii)

\(\limsup_{k\rightarrow\infty} f(x^{k},x)=0\) for all \(x\in S\).

     
  3. (iii)
    $$\begin{aligned}& \lim_{k\rightarrow\infty} \bigl\Vert V\bigl(u^{k} \bigr)-At^{k} \bigr\Vert =\lim_{k\rightarrow \infty} \bigl\Vert u^{k}-At^{k} \bigr\Vert =0, \\& \lim_{k\rightarrow\infty} \bigl\Vert x^{k}-y^{k} \bigr\Vert =\lim_{k\rightarrow\infty } \bigl\Vert T\bigl(y^{k} \bigr)-x^{k} \bigr\Vert =0. \end{aligned}$$
     
  4. (iv)
    $$\lim_{k\rightarrow\infty} \bigl\Vert t^{k}-x^{k} \bigr\Vert =\lim_{k\rightarrow\infty } \bigl\Vert T\bigl(x^{k} \bigr)-x^{k} \bigr\Vert = \lim_{k\rightarrow\infty} \bigl\Vert V \bigl(u^{k}\bigr)-u^{k} \bigr\Vert =0. $$
     

Proof

(i) Let \(x^{*}\in S\). Since \(f(x^{k},x^{*})\leq0\) and \(K_{k}\geq0\), from Lemma 3.2 we have
$$\bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2} \leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+ \xi_{k}. $$
Observing that \(\xi_{k}=2(1-\delta_{k})\frac{\beta_{k}\epsilon_{k}}{\rho _{k}}+2(1-\delta_{k})\beta_{k}^{2}\leq2\frac{\beta_{k}\epsilon _{k}}{\rho_{k}}+2\beta_{k}^{2}\) and using the conditions imposed on the parameters in the initialization step, we see that \(\sum_{k=0}^{\infty}\xi _{k}<\infty\).

Therefore, by Lemma 2.3, \(\lim_{k\rightarrow\infty} \|x^{k}-x^{*}\|^{2}\) exists, and this implies that the sequence \(\{x^{k}\}\) is bounded.

(ii) From Lemma 3.2 we have
$$\begin{aligned} &K_{k}+2(1-\delta_{k})\alpha_{k}\bigl[-f \bigl(x^{k},x^{*}\bigr)\bigr] \\ &\quad \leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2}+\xi_{k} \\ &\quad = \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2}+2(1-\delta_{k})\frac {\beta_{k}\epsilon_{k}}{\rho_{k}}+2(1- \delta_{k})\beta_{k}^{2} \\ &\quad \leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2}+2\frac{\beta _{k}}{\rho_{k}}\epsilon_{k}+2\beta_{k}^{2}. \end{aligned}$$
Summing the above inequalities over \(k=0,\ldots,N\), we obtain
$$\begin{aligned} 0&\leq\sum_{k=0}^{N}\bigl(K_{k}+2(1- \delta_{k})\alpha _{k}\bigl[-f\bigl(x^{k},x^{*} \bigr)\bigr]\bigr) \\ &\leq\sum_{k=0}^{N}\biggl( \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2}+2\frac{\beta_{k}}{\rho_{k}} \epsilon_{k}+2\beta_{k}^{2}\biggr). \end{aligned}$$
This yields
$$\begin{aligned} 0&\leq\sum_{k=0}^{N}K_{k}+\sum _{k=0}^{N}\bigl(2(1-\delta_{k}) \alpha_{k}\bigl[-f\bigl(x^{k},x^{*}\bigr)\bigr] \bigr) \\ &\leq \bigl\Vert x^{0}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x^{N+1}-x^{*} \bigr\Vert ^{2}+2\sum _{k=0}^{N}\frac{\beta_{k}}{\rho_{k}} \epsilon_{k}+2\sum_{k=0}^{N} \beta_{k}^{2}. \end{aligned}$$
Letting \(N\rightarrow+\infty\), we have
$$0\leq\sum_{k=0}^{\infty}K_{k}+\sum _{k=0}^{\infty}\bigl(2(1-\delta _{k}) \alpha_{k}\bigl[-f\bigl(x^{k},x^{*}\bigr)\bigr] \bigr)< +\infty. $$
Hence,
$$ \sum_{k=0}^{\infty}K_{k}< + \infty $$
(19)
and
$$\sum_{k=0}^{\infty}\bigl(2(1- \delta_{k})\alpha _{k}\bigl[-f\bigl(x^{k},x^{*} \bigr)\bigr]\bigr)< +\infty. $$
Since the sequence \(\{x^{k}\}\) is bounded, by Condition B(B6) the sequence \(\{w^{k}\}\) is also bounded. Thus, there is a real number \(w\geq\rho\) such that \(\|w^{k}\|\leq w\). Hence,
$$ \alpha_{k}=\frac{\beta_{k}}{\eta_{k}}=\frac{\beta_{k}}{\max\{ \rho_{k},\|w^{k}\|\}}= \frac{\beta_{k}}{\rho_{k}\max\{1,\frac{\| w^{k}\|}{\rho_{k}}\}}\geq\frac{\beta_{k}\rho}{\rho_{k}w}. $$
(20)
Noting
$$0\leq2(1-b)\sum_{k=0}^{\infty}\bigl( \alpha_{k}\bigl[-f\bigl(x^{k},x^{*}\bigr)\bigr] \bigr)\leq \sum_{k=0}^{\infty}\bigl(2(1- \delta_{k})\alpha_{k}\bigl[-f\bigl(x^{k},x^{*} \bigr)\bigr]\bigr) < +\infty, $$
we have
$$ 0\leq2(1-b)\sum_{k=0}^{\infty} \bigl(\alpha_{k}\bigl[-f\bigl(x^{k},x^{*}\bigr) \bigr]\bigr) < +\infty. $$
(21)
From (20) and (21) we have
$$0\leq2(1-b)\sum_{k=0}^{\infty}\biggl( \frac{\beta_{k}\rho}{\rho _{k}w}\bigl[-f\bigl(x^{k},x^{*}\bigr)\bigr]\biggr) \leq2(1-b)\sum_{k=0}^{\infty}\bigl(\alpha _{k}\bigl[-f\bigl(x^{k},x^{*}\bigr)\bigr]\bigr)< + \infty. $$
That is,
$$0\leq\frac{2\rho(1-b)}{w}\sum_{k=0}^{\infty} \biggl(\frac{\beta _{k}}{\rho_{k}}\bigl[-f\bigl(x^{k},x^{*}\bigr)\bigr] \biggr)< +\infty. $$
Since \(\sum_{k=0}^{\infty}\frac{\beta_{k}}{\rho_{k}}=+\infty\) and \(-f(x^{k},x^{*})\geq0\) we can conclude that
$$\limsup_{k\rightarrow\infty} f\bigl(x^{k},x\bigr)=0 $$
for all \(x\in S\).
(iii) From (19), since \(0 < c\leq\mu_{k}\leq b < \frac{1}{\|A\|^{2}}\) and \(0<\delta_{k}<1\), we have
$$\lim_{k\rightarrow\infty} \bigl\Vert V\bigl(u^{k} \bigr)-At^{k} \bigr\Vert ^{2}=\lim_{k\rightarrow\infty} \bigl\Vert u^{k}-At^{k} \bigr\Vert ^{2}=\lim _{k\rightarrow\infty } \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2}=\lim_{k\rightarrow\infty} \bigl\Vert T\bigl(y^{k} \bigr)-x^{k} \bigr\Vert ^{2}=0. $$
Hence, the result follows.
(iv) The result follows from (iii) and from the following inequalities:
$$\begin{aligned}& \bigl\Vert t^{k}-x^{k} \bigr\Vert \leq \bigl\Vert \delta_{k}x^{k}+(1-\delta_{k})T\bigl(y^{k} \bigr)-x^{k} \bigr\Vert =(1-\delta_{k}) \bigl\Vert x^{k}-T\bigl(y^{k}\bigr) \bigr\Vert \leq \bigl\Vert x^{k}-T\bigl(y^{k}\bigr) \bigr\Vert , \\& \bigl\Vert T\bigl(x^{k}\bigr)-x^{k} \bigr\Vert \leq \bigl\Vert T\bigl(x^{k}\bigr)-T\bigl(y^{k}\bigr) \bigr\Vert + \bigl\Vert x^{k}-T\bigl(y^{k}\bigr) \bigr\Vert \leq \bigl\Vert x^{k}-y^{k} \bigr\Vert + \bigl\Vert x^{k}-T\bigl(y^{k}\bigr) \bigr\Vert , \end{aligned}$$
and \(\|V(u^{k})-u^{k}\|\leq\|V(u^{k})-At^{k}\|+\|u^{k}-At^{k}\|\). □

Theorem 3.4

Assume Condition A and Condition B are satisfied and let \(\{y^{k}\}\), \(\{t^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be sequences generated by Algorithm 3.1. Then the sequences \(\{y^{k}\}\), \(\{t^{k}\}\) and \(\{x^{k}\}\) converge strongly to a point \(p\in S\) and \(\{u^{k}\}\) converges strongly to a point \(Ap\in S_{2}\). Moreover,
$$p=\lim_{k\rightarrow+\infty}P_{S}\bigl(x^{k}\bigr). $$

Proof

Let \(x^{*}\in S\). From Lemma 3.3(i) we have seen that the sequence \(\{x^{k}\}\) is bounded. Hence there exists a subsequence \(\{x^{k_{j}}\}\) of \(\{x^{k}\}\) such that \(x^{k_{j}}\rightharpoonup p\) as \(j\rightarrow+\infty\), where \(p\in C\), and
$$\lim_{j\rightarrow+\infty}f\bigl(x^{k_{j}},x^{*}\bigr)=\limsup_{k\rightarrow +\infty}f\bigl(x^{k},x^{*}\bigr). $$
By the weak upper semicontinuity of \(f(\cdot,x^{*})\) and by Lemma 3.3(ii) we have
$$f\bigl(p,x^{*}\bigr)\geq\limsup_{j\rightarrow+\infty}f \bigl(x^{k_{j}},x^{*}\bigr)=\limsup_{k\rightarrow +\infty}f \bigl(x^{k},x^{*}\bigr)=0. $$
Since \(x^{*}\in S\) and \(p\in C\) we have \(f(x^{*},p)\geq0\). As f is pseudomonotone, this gives \(f(p,x^{*})\leq0\), which together with the above fact yields \(f(p,x^{*})=0\). Hence, by Condition B(B3) we have \(p\in \operatorname{SEP}(f,C)\).
Since
$$\bigl\langle y^{k_{j}},h \bigr\rangle = \bigl\langle y^{k_{j}}-x^{k_{j}},h \bigr\rangle + \bigl\langle x^{k_{j}},h \bigr\rangle , \quad \forall h\in H_{1}, $$
and using \(\lim_{k\rightarrow{+\infty}}\|x^{k}-y^{k}\|=0\) from Lemma 3.3 we have \(y^{k_{j}}\rightharpoonup p\) as \(j\rightarrow +\infty\). Therefore, \(Ay^{k_{j}}\rightharpoonup Ap\) as \(j\rightarrow +\infty\). Similarly, we can have \(t^{k_{j}}\rightharpoonup p\) as \(j\rightarrow+\infty\) and hence \(At^{k_{j}}\rightharpoonup Ap\) as \(j\rightarrow+\infty\).
Assume \(p\notin \operatorname{Fix}T\), that is, \(T(p)\neq p\). Then, using Opial’s condition and Lemma 3.3, we obtain
$$\begin{aligned} \liminf_{j\rightarrow+\infty} \bigl\Vert x^{k_{j}}-p \bigr\Vert &< \liminf_{j\rightarrow+\infty} \bigl\Vert x^{k_{j}}-T(p) \bigr\Vert \\ &=\liminf_{j\rightarrow+\infty} \bigl\Vert x^{k_{j}}-T \bigl(x^{k_{j}}\bigr)+T\bigl(x^{k_{j}}\bigr)-T(p) \bigr\Vert \\ &\leq\liminf_{j\rightarrow+\infty}\bigl( \bigl\Vert x^{k_{j}}-T \bigl(x^{k_{j}}\bigr) \bigr\Vert + \bigl\Vert T\bigl(x^{k_{j}} \bigr)-T(p) \bigr\Vert \bigr) \\ &=\liminf_{j\rightarrow+\infty} \bigl\Vert T\bigl(x^{k_{j}} \bigr)-T(p) \bigr\Vert \\ &\leq\liminf_{j\rightarrow+\infty} \bigl\Vert x^{k_{j}}-p \bigr\Vert , \end{aligned}$$
which is a contradiction. Hence, it must be the case that \(p\in \operatorname{Fix}T\).
Hence,
$$ p\in S_{1}. $$
(22)
Since \(\lim_{k\rightarrow{+\infty}}\|u^{k}-At^{k}\|=0\) and
$$\bigl\langle u^{k_{j}},l \bigr\rangle = \bigl\langle u^{k_{j}}-At^{k_{j}},l \bigr\rangle + \bigl\langle At^{k_{j}},l \bigr\rangle , \quad \forall l\in H_{2}, $$
we have \(u^{k_{j}}\rightharpoonup Ap\) as \(j\rightarrow+\infty\). Assume \(Ap\notin \operatorname{Fix}V\). Then, using Opial’s condition and Lemma 3.3, we obtain
$$\begin{aligned} \liminf_{j\rightarrow+\infty} \bigl\Vert u^{k_{j}}-Ap \bigr\Vert &< \liminf_{j\rightarrow+\infty} \bigl\Vert u^{k_{j}}-V(Ap) \bigr\Vert \\ &=\liminf_{j\rightarrow+\infty} \bigl\Vert u^{k_{j}}-V \bigl(u^{k_{j}}\bigr)+V\bigl(u^{k_{j}}\bigr)-V(Ap) \bigr\Vert \\ &\leq\liminf_{j\rightarrow+\infty}\bigl( \bigl\Vert u^{k_{j}}-V \bigl(u^{k_{j}}\bigr) \bigr\Vert + \bigl\Vert V\bigl(u^{k_{j}} \bigr)-V(Ap) \bigr\Vert \bigr) \\ &=\liminf_{j\rightarrow+\infty} \bigl\Vert V\bigl(u^{k_{j}} \bigr)-V(Ap) \bigr\Vert \\ &\leq\liminf_{j\rightarrow+\infty} \bigl\Vert u^{k_{j}}-Ap \bigr\Vert , \end{aligned}$$
which is a contradiction. Hence, it must be the case that \(Ap\in \operatorname{Fix}V\). Let \(r>0\) and assume \(Ap\notin \operatorname{Fix}(T_{r}^{g})\), that is, \(T_{r}^{g}(Ap)\neq Ap\). Then, using Opial’s condition, Lemma 3.2, and Lemma 3.3, we obtain the following:
$$\begin{aligned} \liminf_{j\rightarrow+\infty} \bigl\Vert At^{k_{j}}-Ap \bigr\Vert &< \liminf_{j\rightarrow+\infty} \bigl\Vert At^{k_{j}}-T_{r}^{g}(Ap) \bigr\Vert \\ &=\liminf_{j\rightarrow+\infty} \bigl\Vert At^{k_{j}}-u^{k_{j}}+u^{k_{j}}-T_{r}^{g}(Ap) \bigr\Vert \\ &\leq\liminf_{j\rightarrow+\infty}\bigl( \bigl\Vert At^{k_{j}}-u^{k_{j}} \bigr\Vert + \bigl\Vert u^{k_{j}}-T_{r}^{g}(Ap) \bigr\Vert \bigr) \\ &=\liminf_{j\rightarrow+\infty} \bigl\Vert u^{k_{j}}-T_{r}^{g}(Ap) \bigr\Vert \\ &=\liminf_{j\rightarrow+\infty} \bigl\Vert T_{r_{k_{j}}}^{g} \bigl(At^{k_{j}}\bigr)-T_{r}^{g}(Ap) \bigr\Vert \\ &\leq\liminf_{j\rightarrow+\infty} \biggl( \bigl\Vert At^{k_{j}}-Ap \bigr\Vert + \frac{|r_{k_{j}}-r|}{r_{k_{j}}} \bigl\Vert T_{r}^{g} \bigl(At^{k_{j}}\bigr)-At^{k_{j}} \bigr\Vert \biggr) \\ &=\liminf_{j\rightarrow+\infty} \biggl( \bigl\Vert At^{k_{j}}-Ap \bigr\Vert + \frac {|r_{k_{j}}-r|}{r_{k_{j}}} \bigl\Vert u^{k_{j}}-At^{k_{j}} \bigr\Vert \biggr) \\ &=\liminf_{j\rightarrow+\infty} \bigl\Vert At^{k_{j}}-Ap \bigr\Vert , \end{aligned}$$
which is a contradiction. Hence, it must be the case that \(Ap\in \operatorname{Fix}(T_{r}^{g})\). By Lemma 2.6(iii) we have \(Ap\in \operatorname{SEP}(g,D)\). Therefore,
$$ Ap\in S_{2}. $$
(23)
Therefore, from (22) and (23) we have \(p\in S\). That is, \(p\in S\) and p is a weak cluster point of the sequence \(\{x^{k}\}\). By Lemma 3.3, \(\{\|x^{k}-p\|^{2}\}\) converges. Hence, we conclude that the sequence \(\{x^{k}\}\) converges strongly to p. As a result, it is easy to see that \(t^{k}\rightarrow p\) and \(y^{k}\rightarrow p\) as \(k\rightarrow+\infty\). Moreover, \(Ay^{k}\rightarrow Ap\), \(At^{k}\rightarrow Ap\), and \(Ax^{k}\rightarrow Ap\). From
$$\bigl\Vert u^{k}-Ap \bigr\Vert \leq \bigl\Vert u^{k}-At^{k} \bigr\Vert + \bigl\Vert At^{k}-Ap \bigr\Vert $$
we have \(u^{k}\rightarrow Ap\). We will end the proof by showing \(p=\lim_{k\rightarrow+\infty}P_{S}(x^{k})\). From Lemma 3.2 we have
$$ \bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2} \leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+\xi_{k}, \quad \forall x^{*}\in S. $$
(24)
Let \(z^{k}=P_{S}(x^{k})\). Since \(P_{S}(x^{k})\in S\) we have
$$ \bigl\Vert x^{k+1}-z^{k} \bigr\Vert ^{2} \leq \bigl\Vert x^{k}-z^{k} \bigr\Vert ^{2}+\xi_{k}. $$
(25)
But by the property of the metric projection we have
$$\bigl\Vert x^{k+1}-z^{k+1} \bigr\Vert ^{2} \leq \bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2}, \quad \forall x^{*}\in S. $$
Thus,
$$ \bigl\Vert x^{k+1}-z^{k+1} \bigr\Vert ^{2} \leq \bigl\Vert x^{k+1}-z^{k} \bigr\Vert ^{2}. $$
(26)
From (25) and (26) we have
$$\bigl\Vert x^{k+1}-z^{k+1} \bigr\Vert ^{2} \leq \bigl\Vert x^{k}-z^{k} \bigr\Vert ^{2}+ \xi_{k}. $$
Since \(\sum_{k=0}^{\infty}\xi_{k}<\infty\), by Lemma 2.3 we see that \(\lim_{k\rightarrow+\infty}\|x^{k}-z^{k}\|^{2}\) exists. Using the definition of the metric projection we have
$$ \bigl\Vert P_{S}\bigl(x^{n} \bigr)-P_{S}\bigl(x^{m}\bigr) \bigr\Vert ^{2}+ \bigl\Vert x^{m}-P_{S}\bigl(x^{m}\bigr) \bigr\Vert ^{2} \leq \bigl\Vert x^{m}-P_{S} \bigl(x^{n}\bigr) \bigr\Vert ^{2}. $$
(27)
Let \(m\geq n\). Then using (24) and (27) we have
$$\begin{aligned} \bigl\Vert z^{n}-z^{m} \bigr\Vert ^{2}&= \bigl\Vert P_{S}\bigl(x^{n}\bigr)-P_{S} \bigl(x^{m}\bigr) \bigr\Vert ^{2} \\ & \leq \bigl\Vert x^{m}-P_{S}\bigl(x^{n}\bigr) \bigr\Vert ^{2}- \bigl\Vert x^{m}-P_{S} \bigl(x^{m}\bigr) \bigr\Vert ^{2} \\ & = \bigl\Vert x^{m}-z^{n} \bigr\Vert ^{2}- \bigl\Vert x^{m}-z^{m} \bigr\Vert ^{2} \\ & \leq \bigl\Vert x^{m-1}-z^{n} \bigr\Vert ^{2}+\xi_{m-1}- \bigl\Vert x^{m}-z^{m} \bigr\Vert ^{2} \\ & \leq \bigl\Vert x^{n}-z^{n} \bigr\Vert ^{2}+ \sum_{i=n}^{m-1}\xi_{i}- \bigl\Vert x^{m}-z^{m} \bigr\Vert ^{2}. \end{aligned}$$
Since \(\sum_{k=0}^{\infty}\xi_{k}<\infty\) and \(\lim_{k\rightarrow+\infty}\|x^{k}-z^{k}\|^{2}\) exists, letting \(m,n\rightarrow+\infty\) we see that \(\|z^{n}-z^{m}\| ^{2}\rightarrow0\). This implies that the sequence \(\{z^{k}\}\) is a Cauchy sequence and hence it converges to some point z in S. Since \(z^{k}=P_{S}(x^{k})\) we have
$$\bigl\langle x^{k}-z^{k},x^{*}-z^{k} \bigr\rangle \leq0, \quad \forall x^{*}\in S. $$
Thus
$$\bigl\langle x^{k}-z^{k},p-z^{k} \bigr\rangle \leq0. $$
Thus,
$$\|z-p\|^{2}= \langle p-z,p-z \rangle=\lim_{k\rightarrow+\infty }\bigl\langle x^{k}-z^{k},p-z^{k} \bigr\rangle \leq0. $$
Hence, \(p=z\) and \(\lim_{k\rightarrow+\infty}P_{S}(x^{k})=p\). □

Let Id denote the identity operator. If \(T=\mathrm{Id}\) and \(V=\mathrm{Id}\), then FPSCSEP (1) reduces to an SEP. Hence, Algorithm 3.1 can be rewritten as follows.

Algorithm 3.1B

Initialization: Choose \(x^{0}\in C\). Take \(\{\rho_{k}\}\), \(\{\beta_{k}\}\), \(\{\epsilon_{k}\}\), \(\{r_{k}\}\), \(\{ \delta_{k}\}\) and \(\{\mu_{k}\}\) such that
$$\begin{aligned}& \rho_{k}\geq\rho>0, \qquad \beta _{k}\geq0, \qquad \epsilon_{k}\geq0, \qquad r_{k}\geq r > 0, \qquad 0 < a < \delta _{k} < b < 1, \\& 0 < c\leq\mu_{k}\leq b < \frac{1}{\|A\|^{2}}, \\& \sum_{k=0}^{\infty}\frac{\beta_{k}}{\rho_{k}}=+\infty, \qquad \sum_{k=0}^{\infty}\frac{\beta_{k}\epsilon_{k}}{\rho _{k}} < + \infty, \qquad \sum_{k=0}^{\infty } \beta_{k}^{2} < +\infty. \end{aligned}$$
Step 1: Take \(w^{k}\in H_{1}\) such that \(w^{k}\in \partial_{\epsilon_{k}}f(x^{k},\cdot)(x^{k})\).
Step 2: Calculate
$$\alpha_{k}=\frac{\beta_{k}}{\eta_{k}},\qquad \eta_{ k}=\max \bigl\{ \rho_{k}, \bigl\Vert w^{k} \bigr\Vert \bigr\} $$
and
$$y^{k}=P_{C}\bigl(x^{k}-\alpha_{k}w^{k} \bigr). $$
Step 3: Evaluate
$$t^{k}=\delta_{k}x^{k}+(1-\delta_{k})y^{k}. $$
Step 4: Evaluate
$$u^{k}=T_{r_{k}}^{g}\bigl(At^{k}\bigr). $$
Step 5: Evaluate
$$x^{k+1}=P_{C}\bigl(t^{k}+\mu_{k}A^{*} \bigl(u^{k}-At^{k}\bigr)\bigr). $$
Step 6: Set \(k:=k+1\) and go to Step 1.
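The steps above can be sketched numerically. The following Python fragment is an illustrative sketch only: the one-dimensional toy problem, the parameter choices, and all function names are our own, not from the paper. We take \(H_{1}=H_{2}=\mathbb{R}\), \(A=\mathrm{Id}\), \(C=[1,+\infty)\), \(f(x,y)=y^{2}-x^{2}\), \(D=[0,2]\), and \(g(u,v)=(v-1)^{2}-(u-1)^{2}\), for which \(S=\{1\}\).

```python
# Illustrative sketch of Algorithm 3.1B on a 1-D toy split equilibrium problem.
# Assumed setting (ours, not the paper's): H1 = H2 = R, A = identity,
# C = [1, +inf), f(x, y) = y^2 - x^2, D = [0, 2], g(u, v) = (v-1)^2 - (u-1)^2,
# so SEP(f, C) = {1}, SEP(g, D) = {1}, and the solution set is S = {1}.

def proj_C(x):                      # metric projection onto C = [1, +inf)
    return max(x, 1.0)

def proj_D(u):                      # metric projection onto D = [0, 2]
    return min(max(u, 0.0), 2.0)

def resolvent_g(u, r):              # T_r^g(u) = argmin_{z in D} (z-1)^2 + (z-u)^2/(2r)
    return proj_D((2.0 * r + u) / (2.0 * r + 1.0))

def algorithm_31B(x, iters=200, rho=1.0, r=1.0, delta=0.5, mu=0.5):
    for k in range(iters):
        beta = 1.0 / (k + 1)        # sum beta/rho = inf, sum beta^2 < inf
        w = 2.0 * x                 # subgradient of f(x, .) at x (epsilon_k = 0)
        eta = max(rho, abs(w))
        alpha = beta / eta
        y = proj_C(x - alpha * w)               # Step 2
        t = delta * x + (1.0 - delta) * y       # Step 3
        u = resolvent_g(t, r)                   # Step 4 (A = Id)
        x = proj_C(t + mu * (u - t))            # Step 5 (A* = Id, mu < 1/||A||^2 = 1)
    return x

x = algorithm_31B(3.0)              # x approaches the solution x* = 1
```

The toy resolvent has a closed form because \(g\) comes from the convex quadratic \(\phi(z)=(z-1)^{2}\); in general Step 4 requires solving a regularized equilibrium subproblem.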

The following corollary is an immediate consequence of Theorem 3.4.

Corollary 3.5

Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and \(A: H_{1}\rightarrow{H_{2}}\) be a nonzero bounded linear operator. Suppose that C is a nonempty closed convex subset of \(H_{1}\), D is a nonempty closed convex subset of \(H_{2}\), and \(f:C\times C\rightarrow{\mathbb{R}}\) and \(g:D\times D\rightarrow{\mathbb{R}}\) are bifunctions. Assume Condition A and Condition B are satisfied and let \(\{y^{k}\}\), \(\{t^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be sequences generated by Algorithm 3.1B. If \(S=\{x^{*}\in \operatorname{SEP}(f,C):Ax^{*}\in \operatorname{SEP}(g,D)\}\neq \emptyset\), then the sequences \(\{y^{k}\}\), \(\{t^{k}\}\) and \(\{x^{k}\}\) converge strongly to a point \(p\in S\) and \(\{u^{k}\}\) converges strongly to a point \(Ap\in \operatorname{SEP}(g,D)\).

3.2 Modified projected subgradient-proximal algorithm

The computation in Algorithm 3.1 involves the evaluation of two projections onto the feasible set C and an estimate of the operator norm \(\|A\|\). It is not an easy task to calculate, or even to estimate, the operator norm \(\|A\|\). Based on Algorithm 3.1, we propose an algorithm whose step-sizes are selected in such a way that its implementation does not need any prior information as regards the operator norm, and which involves only one projection onto the feasible set C.

For any \(\alpha > 0\) define \(h_{\alpha}(x)=\frac{1}{2}\| VT^{g}_{\alpha}A(x)-A(x)\|^{2}\) for all \(x\in H_{1}\), and so \(\nabla h_{\alpha}(x)=A^{*}(VT^{g}_{\alpha}A(x)-A(x))\).

Algorithm 3.2

Initialization: Choose \(x^{0}\in C\). Take \(\{\rho _{k}\}\), \(\{\beta_{k}\}\), \(\{\epsilon_{k}\}\), \(\{r_{k}\}\), \(\{\delta _{k}\}\) and \(\{\eta_{k}\}\) such that
$$\begin{aligned}& \rho_{k}\geq\rho>0,\qquad \beta_{k}\geq0,\qquad \epsilon_{k}\geq0, \qquad r_{k}=r > 0,\qquad 0 < a < \delta_{k} < b < 1, \\& 0 < \eta\leq\eta_{k}\leq4-\eta, \\& \sum_{k=0}^{\infty}\frac{\beta_{k}}{\rho_{k}}=+\infty, \qquad \sum_{k=0}^{\infty}\frac{\beta_{k}\epsilon_{k}}{\rho _{k}} < + \infty,\qquad \sum_{k=0}^{\infty}\beta _{k}^{2} < +\infty. \end{aligned}$$
Step 1: Find \(w^{k}\in H_{1}\) such that \(w^{k}\in \partial_{\epsilon_{k}}f(x^{k},\cdot)(x^{k})\).

Step 2: Evaluate \(y^{k}=P_{T_{k}}(x^{k}-\alpha _{k}w^{k})\) where \(\alpha_{k}=\frac{\beta_{k}}{\eta_{k}}\), \(\eta_{ k}:=\max\{\rho_{k},\|w^{k}\|\}\), and \(T_{0}=C\), \(T_{k}= \{z\in H_{1}:\langle t^{k-1}+\mu _{k-1}\nabla h_{r}(t^{k-1})-x^{k},z-x^{k}\rangle\leq0\} \) for \(k=1,2,3,\ldots\) .

Step 3: Evaluate \(t^{k}=\delta_{k}x^{k}+(1-\delta _{k})T(y^{k})\).

Step 4: Evaluate \(u^{k}=T_{r}^{g}(At^{k})\).

Step 5: Evaluate
$$x^{k+1}=P_{C}\bigl(t^{k}+\mu_{k}\nabla h_{r}\bigl(t^{k}\bigr)\bigr), $$
where
$$\mu_{k}= \textstyle\begin{cases} 0,& \text{if } \nabla h_{r}(t^{k})=0, \\ \frac{\eta_{k}h_{r}(t^{k})}{\|\nabla h_{r}(t^{k})\|^{2}}, & \text{otherwise}. \end{cases} $$
Step 6: Set \(k=k+1\) and go to Step 1.
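The self-adaptive choice of \(\mu_{k}\) in Step 5 can be written as a small helper. This is a sketch; the function name is ours, not from the paper.

```python
def step_size(eta_k, h_val, grad):
    """mu_k for Step 5 of Algorithm 3.2: 0 if grad h_r(t^k) = 0, and
    eta_k * h_r(t^k) / ||grad h_r(t^k)||^2 otherwise.

    Note that no knowledge of the operator norm ||A|| is required: only the
    current value h_r(t^k) and its gradient enter the formula.
    """
    g2 = sum(gi * gi for gi in grad)      # ||grad h_r(t^k)||^2
    if g2 == 0.0:
        return 0.0
    return eta_k * h_val / g2

mu = step_size(2.0, 0.5, [1.0, 0.0])      # 2.0 * 0.5 / 1.0 = 1.0
```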

Remark 3.4

By the definition of \(T_{k}\), we see that \(T_{k}\) is either a half-space or the whole space \(H_{1}\). Therefore, for each k, \(T_{k}\) is a closed convex set, and the computation of the projection \(y^{k}=P_{T_{k}}(x^{k}-\alpha_{k}w^{k})\) in Step 2 of Algorithm 3.2 is explicit and easier than that of the projection \(y^{k}=P_{C}(x^{k}-\alpha_{k}w^{k})\) in Step 2 of Algorithm 3.1 when C has a complex structure. Moreover, by reasoning similar to that for Algorithm 3.1, Algorithm 3.2 is well defined, and obviously the solution set S of the FPSCSEP is closed and convex.
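The explicit projection mentioned in the remark is the standard formula for projecting onto a half-space. The following sketch (names ours) projects onto \(T=\{z:\langle a,z-c\rangle\leq0\}\), which matches \(T_{k}\) with \(a=t^{k-1}+\mu_{k-1}\nabla h_{r}(t^{k-1})-x^{k}\) and \(c=x^{k}\).

```python
import numpy as np

def proj_halfspace(p, a, c):
    """Metric projection of p onto {z : <a, z - c> <= 0}.

    If a = 0 the set is the whole space and p is returned unchanged;
    otherwise p is shifted along a by max(0, <a, p - c>) / ||a||^2.
    """
    p = np.asarray(p, dtype=float)
    a = np.asarray(a, dtype=float)
    c = np.asarray(c, dtype=float)
    s = float(np.dot(a, p - c))
    if s <= 0.0:                    # p already lies in the half-space
        return p
    return p - (s / float(np.dot(a, a))) * a

q = proj_halfspace([2.0, 0.0], a=[1.0, 0.0], c=[0.0, 0.0])
# q lies on the boundary hyperplane <a, z - c> = 0
```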

Lemma 3.6

Let \(\{y^{k}\}\), \(\{t^{k}\}\) and \(\{x^{k}\}\) be sequences generated by Algorithm 3.2.
  1. (i)

    \(C\subset T_{k}\) for all \(k\geq0\).

     
  2. (ii)
    For \(x^{*}\in S\),
    $$\bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}\leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+2 \alpha_{k}(1-\delta _{k})f\bigl(x^{k},x^{*} \bigr)-L_{k}+\xi_{k}, $$
    where
    $$L_{k}=(1-\delta_{k}) \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2}+\delta_{k}(1-\delta_{k}) \bigl\Vert T\bigl(y^{k}\bigr)-x^{k} \bigr\Vert ^{2} $$
    and
    $$\xi_{k}=2(1-\delta_{k})\frac{\beta_{k}\epsilon_{k}}{\rho _{k}}+2(1- \delta_{k})\beta_{k}^{2}. $$
     

Proof

(i) From \(x^{k}=P_{C}(t^{k-1}+\mu_{k-1}\nabla h_{r}(t^{k-1}))\) and by the property of the metric projection we have
$$\bigl\langle t^{k-1}+\mu_{k-1}\nabla h_{r} \bigl(t^{k-1}\bigr)-x^{k},z-x^{k}\bigr\rangle \leq0, \quad \forall z\in C, $$
which together with the definition of \(T_{k}\) implies that \(C\subset T_{k}\).
(ii) Let \(x^{*}\in S\). From \(y^{k}=P_{T_{k}}(x^{k}-\frac{\beta _{k}}{\eta_{k}}w^{k})\) and \(x^{*},x^{k}\in C\subset T_{k}\) we have
$$\bigl\langle x^{k}-\alpha_{k}w^{k}-y^{k},y^{k}-x^{*} \bigr\rangle \geq0. $$
Then, by a proof similar to that of Lemma 3.1, we have
$$\bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}\leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+2 \alpha_{k}(1-\delta _{k})f\bigl(x^{k},x^{*} \bigr)-L_{k}+\xi_{k}, $$
where
$$L_{k}=(1-\delta_{k}) \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2}+\delta_{k}(1-\delta_{k}) \bigl\Vert T\bigl(y^{k}\bigr)-x^{k} \bigr\Vert ^{2} $$
and
$$\xi_{k}=2(1-\delta_{k})\frac{\beta_{k}\epsilon_{k}}{\rho _{k}}+2(1- \delta_{k})\beta_{k}^{2}. $$
 □

Lemma 3.7

Let \(\{y^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be sequences generated by Algorithm 3.2. For \(x^{*}\in S\),
$$\bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2} \leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+2(1- \delta_{k})\alpha _{k}f\bigl(x^{k},x^{*} \bigr)+\xi_{k}-K_{k}-\omega_{k}, $$
where
$$\begin{aligned}& K_{k}=(1-\delta_{k}) \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2}+\delta_{k}(1-\delta_{k}) \bigl\Vert T\bigl(y^{k}\bigr)-x^{k} \bigr\Vert ^{2}+ \bigl\Vert T^{g}_{r_{k}}At^{k}-At^{k} \bigr\Vert ^{2}, \\& \xi_{k}=2(1-\delta_{k})\frac{\beta_{k}\epsilon_{k}}{\rho _{k}}+2(1- \delta_{k})\beta_{k}^{2}, \end{aligned}$$
and
$$\omega_{k}= \textstyle\begin{cases} 0,& \textit{if } \nabla h_{r}(t^{k})=0, \\ \eta_{k}(4-\eta_{k})\frac{h_{r}(t^{k})^{2}}{\|\nabla h_{r}(t^{k})\| ^{2}},&\textit{otherwise}. \end{cases} $$

Proof

Let \(x^{*}\in S\). By Lemma 2.6,
$$\begin{aligned} \bigl\Vert T^{g}_{r}At^{k}-Ax^{*} \bigr\Vert ^{2} &= \bigl\Vert T^{g}_{r}At^{k}-T^{g}_{r}Ax^{*} \bigr\Vert ^{2} \\ &\leq\bigl\langle T^{g}_{r}At^{k}-T^{g}_{r}Ax^{*},At^{k}-Ax^{*} \bigr\rangle \\ &=\bigl\langle T^{g}_{r}At^{k}-Ax^{*},At^{k}-Ax^{*} \bigr\rangle \\ &=\frac{1}{2} \bigl[ \bigl\Vert T^{g}_{r}At^{k}-Ax^{*} \bigr\Vert ^{2}+ \bigl\Vert At^{k}-Ax^{*} \bigr\Vert ^{2}- \bigl\Vert T^{g}_{r}At^{k}-At^{k} \bigr\Vert ^{2}\bigr]. \end{aligned}$$
That is,
$$ \bigl\Vert T^{g}_{r}At^{k}-Ax^{*} \bigr\Vert ^{2}\leq\frac{1}{2} \bigl( \bigl\Vert T^{g}_{r}At^{k}-Ax^{*} \bigr\Vert ^{2}+ \bigl\Vert At^{k}-Ax^{*} \bigr\Vert ^{2}- \bigl\Vert T^{g}_{r}At^{k}-At^{k} \bigr\Vert ^{2} \bigr). $$
(28)
In view of (28) we get
$$\bigl\Vert T^{g}_{r}At^{k}-Ax^{*} \bigr\Vert ^{2}\leq \bigl\Vert At^{k}-Ax^{*} \bigr\Vert ^{2}- \bigl\Vert T^{g}_{r}At^{k}-At^{k} \bigr\Vert ^{2}. $$
Hence,
$$ \bigl\Vert V\bigl(u^{k}\bigr)-Ax^{*} \bigr\Vert ^{2}\leq \bigl\Vert T^{g}_{r}At^{k}-Ax^{*} \bigr\Vert ^{2}\leq \bigl\Vert At^{k}-Ax^{*} \bigr\Vert ^{2}- \bigl\Vert T^{g}_{r}At^{k}-At^{k} \bigr\Vert ^{2}. $$
(29)
Using (29) we have
$$\begin{aligned} &\bigl\langle t^{k}-x^{*},\nabla h_{r} \bigl(t^{k}\bigr)\bigr\rangle \\ &\quad =\bigl\langle t^{k}-x^{*},A^{*}\bigl(V \bigl(u^{k}\bigr)-At^{k}\bigr)\bigr\rangle \\ &\quad =\bigl\langle A\bigl(t^{k}-x^{*}\bigr),V \bigl(u^{k}\bigr)-At^{k}\bigr\rangle \\ &\quad =\bigl\langle A\bigl(t^{k}-x^{*}\bigr)+V \bigl(u^{k}\bigr)-At^{k}-V\bigl(u^{k} \bigr)+At^{k} ,V\bigl(u^{k}\bigr)-At^{k}\bigr\rangle \\ &\quad =\bigl\langle V\bigl(u^{k}\bigr)-Ax^{*} ,V \bigl(u^{k}\bigr)-At^{k}\bigr\rangle - \bigl\Vert V \bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2} \\ &\quad =\frac{1}{2}\bigl( \bigl\Vert V\bigl(u^{k} \bigr)-Ax^{*} \bigr\Vert ^{2}+ \bigl\Vert V \bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2}- \bigl\Vert At^{k}-Ax^{*} \bigr\Vert ^{2}\bigr)- \bigl\Vert V\bigl(u^{k}\bigr)-At^{k} \bigr\Vert ^{2} \\ &\quad =\frac{1}{2} \bigl( \bigl\Vert V\bigl(u^{k} \bigr)-Ax^{*} \bigr\Vert ^{2}- \bigl\Vert V \bigl(u^{k} \bigr)-At^{k} \bigr\Vert ^{2}- \bigl\Vert At^{k}-Ax^{*} \bigr\Vert ^{2} \bigr) \\ &\quad \leq-\frac{1}{2} \bigl( \bigl\Vert T^{g}_{r}At^{k}-At^{k} \bigr\Vert ^{2}+ \bigl\Vert V\bigl(u^{k} \bigr)-At^{k} \bigr\Vert ^{2} \bigr) \\ &\quad =-\frac{1}{2} \bigl( \bigl\Vert T^{g}_{r}At^{k}-At^{k} \bigr\Vert ^{2}+2h_{r}\bigl(t^{k}\bigr) \bigr). \end{aligned}$$
That is,
$$ \bigl\langle t^{k}-x^{*},\nabla h_{r}\bigl(t^{k}\bigr)\bigr\rangle \leq-\frac{1}{2} \bigl( \bigl\Vert u^{k}-At^{k} \bigr\Vert ^{2}+2h_{r}\bigl(t^{k}\bigr)\bigr). $$
(30)
By Lemma 2.6 and (30), we have
$$\begin{aligned} \bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2} &= \bigl\Vert P_{C}\bigl(t^{k}+\mu_{k}\nabla h_{r}\bigl(t^{k}\bigr)\bigr)-P_{C} \bigl(x^{*}\bigr) \bigr\Vert ^{2} \\ &\leq \bigl\Vert t^{k}+\mu_{k}\nabla h_{r} \bigl(t^{k}\bigr)-x^{*} \bigr\Vert ^{2} \\ &= \bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}+ \mu_{k}^{2} \bigl\Vert \nabla h_{r} \bigl(t^{k}\bigr) \bigr\Vert ^{2}+2\mu _{k}\bigl\langle \nabla h_{r}\bigl(t^{k}\bigr),t^{k}-x^{*} \bigr\rangle \\ &\leq \bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}+ \bigl(\mu_{k} \bigl\Vert \nabla h_{r}\bigl(t^{k} \bigr) \bigr\Vert \bigr)^{2}-4\mu _{k}h_{r} \bigl(t^{k}\bigr)- \bigl\Vert u^{k}-At^{k} \bigr\Vert ^{2} \\ &= \bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}- \bigl\Vert u^{k}-At^{k} \bigr\Vert ^{2}-\bigl[4 \mu _{k}h_{r}\bigl(t^{k}\bigr)-\bigl( \mu_{k} \bigl\Vert \nabla h_{r}\bigl(t^{k}\bigr) \bigr\Vert \bigr)^{2}\bigr]. \end{aligned}$$
That is,
$$ \bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2} \leq \bigl\Vert t^{k}-x^{*} \bigr\Vert ^{2}- \bigl\Vert u^{k}-At^{k} \bigr\Vert ^{2}-\bigl[4\mu_{k}h_{r}\bigl(t^{k} \bigr)-\bigl(\mu_{k} \bigl\Vert \nabla h_{r} \bigl(t^{k}\bigr) \bigr\Vert \bigr)^{2}\bigr]. $$
(31)
Therefore, using (31) and Lemma 3.6, we have
$$\bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2} \leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+2(1- \delta_{k})\alpha _{k}f\bigl(x^{k},x^{*} \bigr)+\xi_{k}-K_{k}-\omega_{k}, $$
where
$$\begin{aligned}& K_{k}=(1-\delta_{k}) \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2}+\delta_{k}(1-\delta_{k}) \bigl\Vert T\bigl(y^{k}\bigr)-x^{k} \bigr\Vert ^{2}+ \bigl\Vert u^{k}-At^{k} \bigr\Vert ^{2}, \\& \xi_{k}=2(1-\delta_{k})\frac{\beta_{k}\epsilon_{k}}{\rho _{k}}+2(1- \delta_{k})\beta_{k}^{2}, \end{aligned}$$
and
$$\omega_{k}=4\mu_{k}h_{r}\bigl(t^{k} \bigr)-\bigl(\mu_{k} \bigl\Vert \nabla h_{r} \bigl(t^{k}\bigr) \bigr\Vert \bigr)^{2}. $$
Note that by the definition of \(\mu_{k}\) we have
$$\omega_{k}= \textstyle\begin{cases} 0,& \text{if } \nabla h_{r}(t^{k})=0, \\ \eta_{k}(4-\eta_{k})\frac{h_{r}(t^{k})^{2}}{\|\nabla h_{r}(t^{k})\| ^{2}},&\text{otherwise}. \end{cases} $$
 □

Lemma 3.8

Let \(\{y^{k}\}\), \(\{t^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be sequences generated by Algorithm 3.2. Then:
  1. (i)

    For \(x^{*}\in S\), the limit of the sequence \(\{\|x^{k}-x^{*}\|^{2}\}\) exists (and \(\{x^{k}\}\) is bounded).

     
  2. (ii)

    \(\limsup_{k\rightarrow\infty} f(x^{k},x)=0\) for all \(x\in S\).

     
  3. (iii)
    $$\begin{aligned}& \lim_{k\rightarrow\infty} \bigl\Vert u^{k}-At^{k} \bigr\Vert =\lim_{k\rightarrow \infty} \bigl\Vert x^{k}-y^{k} \bigr\Vert =\lim_{k\rightarrow\infty} \bigl\Vert T\bigl(y^{k} \bigr)-x^{k} \bigr\Vert =0, \\& \lim_{k\rightarrow\infty} \bigl\Vert t^{k}-x^{k} \bigr\Vert =\lim_{k\rightarrow\infty } \bigl\Vert T\bigl(x^{k} \bigr)-x^{k} \bigr\Vert =0. \end{aligned}$$
     
  4. (iv)
    $$\lim_{k\rightarrow\infty} h_{r}\bigl(t^{k}\bigr)=\lim _{k\rightarrow\infty} \bigl\Vert V\bigl(u^{k}\bigr)-u^{k} \bigr\Vert =0. $$
     

Proof

(i) Let \(x^{*}\in S\). Since \(f(x^{k},x^{*})\leq0\), \(K_{k}\geq0\), and \(\omega_{k}\geq0\), from Lemma 3.7 we have
$$\bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2} \leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}+ \xi_{k}. $$
Therefore, the result follows.
(ii) From Lemma 3.7 we have
$$\begin{aligned} \omega_{k}+K_{k}+2(1-\delta_{k})\alpha _{k}\bigl[-f\bigl(x^{k},x^{*}\bigr)\bigr] &\leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2}+ \xi_{k} \\ &\leq \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2}+2 \frac{\beta _{k}}{\rho_{k}}\epsilon_{k}+2\beta_{k}^{2}. \end{aligned}$$
Summing the above inequalities over \(k=0,\ldots,N\), we obtain
$$\begin{aligned} 0&\leq\sum_{k=0}^{N} \bigl( \omega_{k}+K_{k}+2(1-\delta _{k}) \alpha_{k}\bigl[-f\bigl(x^{k},x^{*}\bigr)\bigr] \bigr) \\ &\leq\sum_{k=0}^{N} \biggl( \bigl\Vert x^{k}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x^{k+1}-x^{*} \bigr\Vert ^{2}+2\frac{\beta_{k}}{\rho_{k}} \epsilon_{k}+2\beta _{k}^{2} \biggr). \end{aligned}$$
This yields
$$\begin{aligned} 0&\leq\sum_{k=0}^{N}\omega_{k}+ \sum_{k=0}^{N}K_{k}+\sum _{k=0}^{N} \bigl(2(1-\delta_{k})\alpha _{k}\bigl[-f\bigl(x^{k},x^{*}\bigr)\bigr] \bigr) \\ &\leq \bigl\Vert x^{0}-x^{*} \bigr\Vert ^{2}- \bigl\Vert x^{N+1}-x^{*} \bigr\Vert ^{2}+2\sum _{k=0}^{N}\frac{\beta_{k}}{\rho_{k}} \epsilon_{k}+2\sum_{k=0}^{N} \beta_{k}^{2}. \end{aligned}$$
Letting \(N\rightarrow+\infty\), we have
$$0\leq\sum_{k=0}^{\infty}\omega_{k}+ \sum_{k=0}^{\infty}K_{k}+\sum _{k=0}^{\infty} \bigl(2(1-\delta_{k}) \alpha_{k}\bigl[-f\bigl(x^{k},x^{*}\bigr)\bigr] \bigr)< +\infty. $$
Hence,
$$ \sum_{k=0}^{\infty} \omega_{k}< +\infty,\qquad \sum_{k=0}^{\infty}K_{k}< + \infty, \qquad \sum_{k=0}^{\infty} \bigl(2(1- \delta_{k})\alpha_{k}\bigl[-f\bigl(x^{k},x^{*} \bigr)\bigr] \bigr)< +\infty. $$
(32)
The result then follows in the same way as in the proof of Lemma 3.3(ii).
(iii) From \(\sum_{k=0}^{\infty}K_{k}<+\infty\) and \(0<\delta _{k}<1\) we have
$$\lim_{k\rightarrow\infty} \bigl\Vert u^{k}-At^{k} \bigr\Vert ^{2}=\lim_{k\rightarrow \infty} \bigl\Vert x^{k}-y^{k} \bigr\Vert ^{2}=\lim _{k\rightarrow\infty} \bigl\Vert T\bigl(y^{k}\bigr)-x^{k} \bigr\Vert ^{2}=0. $$
The remaining result follows from the following inequalities:
$$\bigl\Vert t^{k}-x^{k} \bigr\Vert \leq \bigl\Vert \delta_{k}x^{k}+(1-\delta_{k})T\bigl(y^{k} \bigr)-x^{k} \bigr\Vert =(1-\delta_{k}) \bigl\Vert x^{k}-T\bigl(y^{k}\bigr) \bigr\Vert \leq \bigl\Vert x^{k}-T\bigl(y^{k}\bigr) \bigr\Vert $$
and
$$\bigl\Vert T\bigl(x^{k}\bigr)-x^{k} \bigr\Vert \leq \bigl\Vert T\bigl(x^{k}\bigr)-T\bigl(y^{k}\bigr) \bigr\Vert + \bigl\Vert x^{k}-T\bigl(y^{k}\bigr) \bigr\Vert \leq \bigl\Vert x^{k}-y^{k} \bigr\Vert + \bigl\Vert x^{k}-T\bigl(y^{k}\bigr) \bigr\Vert . $$
(iv) From (32) we have \(\sum_{k=0}^{\infty} [4\mu _{k}h_{r}(t^{k})-(\mu_{k}\|\nabla h_{r}(t^{k})\|)^{2}] < +\infty\). Without loss of generality, we can assume that \(\nabla h_{r}(t^{k})\neq0\) for all k. Thus, \(\sum_{k=0}^{\infty} [4\mu_{k}h_{r}(t^{k})-(\mu_{k}\|\nabla h_{r}(t^{k})\|)^{2}] < +\infty\) implies that
$$\sum_{k=0}^{\infty}\eta_{k}(4- \eta_{k})\frac{h_{r}(t^{k})^{2}}{\| \nabla h_{r}(t^{k})\|^{2}} < +\infty. $$
Since \(0 < \eta\leq\eta_{k}\leq4-\eta\) we have
$$\sum_{k=0}^{\infty}\frac{h_{r}(t^{k})^{2}}{\|\nabla h_{r}(t^{k})\| ^{2}} < +\infty. $$
Since \(\lim_{k\rightarrow\infty}\|t^{k}-x^{k}\|=0\) and \(\{ x^{k}\}\) is bounded, \(\{t^{k}\}\) is also bounded. Thus, it follows from the Lipschitz continuity of \(\nabla h_{r}(\cdot)\) that \(\{\|\nabla h_{r}(t^{k})\|^{2}\}\) is bounded. This together with the last relation implies that \(\lim_{k\rightarrow\infty}h_{r}(t^{k})=0\). The inequality \(\|V(u^{k})-u^{k}\|\leq(2h_{r}(t^{k}))^{\frac{1}{2}}+\| u^{k}-At^{k}\|\) yields
$$\lim_{k\rightarrow\infty} \bigl\Vert V\bigl(u^{k} \bigr)-u^{k} \bigr\Vert =0. $$
 □

Theorem 3.9

Assume Condition A and Condition B are satisfied and let \(\{y^{k}\}\), \(\{t^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be sequences generated by Algorithm 3.2. Then the sequences \(\{y^{k}\}\), \(\{t^{k}\}\) and \(\{x^{k}\}\) converge strongly to a point \(p\in S\) and \(\{u^{k}\}\) converges strongly to a point \(Ap\in S_{2}\). Moreover,
$$p=\lim_{k\rightarrow+\infty}P_{S}\bigl(x^{k}\bigr). $$

Proof

Taking the definition of \(h_{r}\) into account, the proof is the same as that of Theorem 3.4. □

For any \(\alpha > 0\) define \(h_{\alpha}(x)=\frac{1}{2}\| T^{g}_{\alpha}A(x)-A(x)\|^{2}\) for all \(x\in H_{1}\), and so \(\nabla h_{\alpha}(x)=A^{*}(T^{g}_{\alpha}A(x)-A(x))\). Setting \(T=\mathrm{Id}\) and \(V=\mathrm{Id}\), the FPSCSEP (1) reduces to an SEP. Hence, Algorithm 3.2 can be rewritten as follows:

Algorithm 3.2B

Initialization: Choose \(x^{0}\in C\). Take \(\{\rho_{k}\}\), \(\{\beta_{k}\}\), \(\{\epsilon_{k}\}\), \(\{r_{k}\}\), \(\{ \delta_{k}\}\) and \(\{\eta_{k}\}\) such that
$$\begin{aligned}& \rho_{k}\geq\rho>0, \qquad \beta_{k}\geq0,\qquad \epsilon_{k}\geq0,\qquad r_{k}=r > 0,\qquad 0 < a < \delta_{k} < b < 1, \\& 0 < \eta\leq\eta_{k}\leq4-\eta, \\& \sum_{k=0}^{\infty}\frac{\beta_{k}}{\rho_{k}}=+\infty, \qquad \sum_{k=0}^{\infty}\frac{\beta_{k}\epsilon_{k}}{\rho _{k}} < + \infty, \qquad \sum_{k=0}^{\infty}\beta _{k}^{2} < +\infty. \end{aligned}$$
Step 1: Find \(w^{k}\in H_{1}\) such that \(w^{k}\in \partial_{\epsilon_{k}}f(x^{k},\cdot)(x^{k})\).
Step 2: Evaluate \(y^{k}=P_{T_{k}}(x^{k}-\alpha _{k}w^{k})\) where \(\alpha_{k}=\frac{\beta_{k}}{\eta_{k}}\), \(\eta_{ k}:=\max\{\rho_{k},\|w^{k}\|\}\) and
$$T_{k}= \textstyle\begin{cases} C,& \text{if } k=0, \\ \{z\in H_{1}:\langle t^{k-1}+\mu_{k-1}\nabla h_{r}(t^{k-1})-x^{k},z-x^{k}\rangle\leq0\}, &\text{otherwise}. \end{cases} $$
Step 3: Evaluate \(t^{k}=\delta_{k}x^{k}+(1-\delta _{k})y^{k}\).

Step 4: Evaluate \(u^{k}=T_{r}^{g}(At^{k})\).

Step 5: Evaluate
$$x^{k+1}=P_{C}\bigl(t^{k}+\mu_{k}\nabla h_{r}\bigl(t^{k}\bigr)\bigr), $$
where
$$\mu_{k}= \textstyle\begin{cases} 0 ,& \text{if } \nabla h_{r}(t^{k})=0, \\ \frac{\eta_{k}h_{r}(t^{k})}{\|\nabla h_{r}(t^{k})\|^{2}}, & \text{otherwise}. \end{cases} $$
Step 6: Set \(k=k+1\) and go to Step 1.
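A hedged numerical sketch of these steps follows; the one-dimensional toy instance, the parameter choices, and all names are our own, not from the paper. We take \(A=\mathrm{Id}\), \(C=[1,+\infty)\), \(f(x,y)=y^{2}-x^{2}\), \(D=[0,2]\), and \(g(u,v)=(v-1)^{2}-(u-1)^{2}\), so \(S=\{1\}\); no estimate of \(\|A\|\) enters the iteration.

```python
# Illustrative sketch of Algorithm 3.2B on a 1-D toy SEP (our own instance):
# A = Id, C = [1, +inf), f(x, y) = y^2 - x^2, D = [0, 2],
# g(u, v) = (v-1)^2 - (u-1)^2, hence S = {1}. The step-size mu_k is built
# from h_r and its gradient, so ||A|| is never estimated.

def proj_C(x):                          # metric projection onto C = [1, +inf)
    return max(x, 1.0)

def resolvent_g(u, r=1.0):              # T_r^g = prox of (.-1)^2 clipped to D
    return min(max((2.0 * r + u) / (2.0 * r + 1.0), 0.0), 2.0)

def algorithm_32B(x, iters=200, rho=1.0, r=1.0, delta=0.5, eta_k=1.0):
    halfspace = None                    # (a, c): T_k = {z : a*(z - c) <= 0}; T_0 = C
    for k in range(iters):
        beta = 1.0 / (k + 1)
        w = 2.0 * x                     # subgradient of f(x, .) at x
        alpha = beta / max(rho, abs(w))
        p = x - alpha * w
        if halfspace is None:           # Step 2 with T_0 = C
            y = proj_C(p)
        else:                           # explicit 1-D half-space projection
            a, c = halfspace
            y = p if a * (p - c) <= 0 else c
        t = delta * x + (1.0 - delta) * y           # Step 3
        u = resolvent_g(t, r)                       # Step 4
        grad = u - t                    # grad h_r(t) = A*(T_r^g(A t) - A t), A = Id
        h = 0.5 * grad * grad           # h_r(t) = 0.5 * |T_r^g(t) - t|^2
        mu = 0.0 if grad == 0.0 else eta_k * h / (grad * grad)
        v = t + mu * grad
        x_next = proj_C(v)                          # Step 5
        halfspace = (v - x_next, x_next)            # defines T_{k+1}
        x = x_next
    return x

x = algorithm_32B(3.0)                  # x approaches the solution x* = 1
```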

The following corollary is an immediate consequence of Theorem 3.9.

Corollary 3.10

Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces and \(A: H_{1}\rightarrow{H_{2}}\) be a nonzero bounded linear operator. Suppose that C is a nonempty closed convex subset of \(H_{1}\), D is a nonempty closed convex subset of \(H_{2}\), and \(f:C\times C\rightarrow{\mathbb{R}}\) and \(g:D\times D\rightarrow{\mathbb{R}}\) are bifunctions. Assume Condition A and Condition B are satisfied and let \(\{y^{k}\}\), \(\{t^{k}\}\), \(\{u^{k}\}\), and \(\{x^{k}\}\) be sequences generated by Algorithm 3.2B. If \(S=\{x^{*}\in \operatorname{SEP}(f,C):Ax^{*}\in \operatorname{SEP}(g,D)\}\neq \emptyset\), then the sequences \(\{y^{k}\}\), \(\{t^{k}\}\) and \(\{x^{k}\}\) converge strongly to a point \(p\in S\) and \(\{u^{k}\}\) converges strongly to a point \(Ap\in \operatorname{SEP}(g,D)\).

4 Application and numerical result

In this section we present some applications and perform several numerical experiments to illustrate the computational performance of the proposed algorithms (Algorithm 3.1 and Algorithm 3.2) and to compare their convergence.

Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, let \(A:H_{1}\rightarrow H_{2}\) be a nonzero bounded linear operator, and let C and D be two nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Let \(\psi:C\rightarrow\mathbb{R}\) and \(\phi:D\rightarrow\mathbb {R}\) be convex and lower semicontinuous functions such that ψ is upper semicontinuous and ϵ-subdifferentiable at every point of C. Consider the following optimization problem:
$$ \text{find } x^{*}\in{H_{1}} \text{ such that } \textstyle\begin{cases} x^{*}\in C, \\ \psi(x^{*})\leq\psi(y),\quad \forall{ y\in{C}}, \\ u^{*}=Ax^{*}\in D, \\ \phi(u^{*})\leq\phi(v),\quad \forall {v\in{D}}. \end{cases} $$
(33)
Set \(f(x,y)=\psi(y)-\psi(x)\) and \(g(u,v)=\phi(v)-\phi(u)\). Then g satisfies Condition A and f satisfies Condition B as a consequence of the conditions imposed on ψ and ϕ. Therefore, optimization problem (33) is an SEP, which is a particular case of FPSCSEP, and Algorithm 3.1B and Algorithm 3.2B solve (33).
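To make the reduction concrete, the equilibrium bifunction associated with a minimization problem can be built as below. This is a minimal Python sketch; `bifunction_from_objective` is a hypothetical helper name, and ψ is taken from Example 4.1 purely for illustration:

```python
def bifunction_from_objective(psi):
    """Equilibrium bifunction f(x, y) = psi(y) - psi(x) associated with
    minimizing psi: x* solves EP(f, C), i.e. f(x*, y) >= 0 for all y in C,
    exactly when x* minimizes psi over C."""
    return lambda x, y: psi(y) - psi(x)

psi = lambda x: 2 * x + 5            # the objective of Example 4.1 below
f = bifunction_from_objective(psi)

# at the minimizer x* = 1 of psi over C = {x >= 1}:
# f(1, y) = 2y - 2 >= 0 for every y >= 1
```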
Let H be a real Hilbert space and C a nonempty closed convex subset of H. Let \(\psi:C\rightarrow\mathbb{R}\) and \(\phi :C\rightarrow\mathbb{R}\) be convex, lower semicontinuous, upper semicontinuous, and ϵ-subdifferentiable at every point of C. The following is a multi-objective optimization problem:
$$ \begin{aligned} &\min\bigl\{ \psi(x),\phi(x)\bigr\} \\ &\quad \mbox{s.t. } x\in C. \end{aligned} $$
(34)
Therefore, multi-objective optimization problem (34) is an equilibrium problem, which is also a particular case of FPSCSEP. Next we consider a simple optimization problem and its numerical results as an application. The algorithms are coded in Matlab R2017a (9.2.0.556344) and run on a MacBook with a 1.1 GHz Intel Core m3 processor and 8 GB of 1867 MHz LPDDR3 memory.

Example 4.1

Consider the fixed point constrained optimization problem
$$\text{find } x^{*}\in{C} \text{ such that } \textstyle\begin{cases} x^{*}\in \operatorname{Fix}T, \\ \psi(x^{*})\leq\psi(y),\quad \forall{ y\in{C}}, \\ u^{*}=Ax^{*}\in \operatorname{Fix}V, \\ \phi(u^{*})\leq\phi(v),\quad \forall{v\in{D}}, \end{cases} $$
where \(H_{1}=\mathbb{R}\), \(H_{2}=\mathbb{R}^{2}\), \(A:H_{1}\rightarrow H_{2}\) is given by \(A(x)=(-\frac{x}{2},\frac{x}{2})\), \(C=\{x\in\mathbb {R}:x\geq1\}\), \(D=\{(u_{1},u_{2})\in\mathbb{R}^{2}:u_{2}-u_{1}\geq 1\}\), \(\psi:C\rightarrow\mathbb{R}\) is given by \(\psi(x)=2x+5\), \(\phi:D\rightarrow\mathbb{R}\) is given by \(\phi(u)=\phi (u_{1},u_{2})=u_{2}-u_{1}\), and the nonexpansive mappings \(T:C\rightarrow C\) and \(V:D\rightarrow D\) are given by \(T(x)=\frac{x+1}{2}\) and \(V(u)=V(u_{1},u_{2})=(-u_{2},-u_{1})\).

Set \(f(x,y)=\psi(y)-\psi(x)=2y-2x\) and \(g(u,v)=\phi(v)-\phi (u)=(v_{2}-v_{1})-(u_{2}-u_{1})\).

It is easy to check that g and f satisfy Condition A and Condition B, respectively. It is also easy to see that \(A^{*}(u)=A^{*}(u_{1},u_{2})=-\frac{1}{2}u_{1}+\frac{1}{2}u_{2}\) and \(\|A\|=\frac{1}{\sqrt{2}}\). Hence, \(\operatorname{Fix}T=\{1\}\), \(\operatorname{SEP}(f,C)=\{1\}\), \(\operatorname{Fix}V=\{ (u_{1},u_{2})\in D: u_{2}=-u_{1}\}\), and \(\operatorname{SEP}(g,D)=\{(u_{1},u_{2})\in D: u_{2}-u_{1}=1\}\). Therefore, \(\operatorname{SFPSCEP}(f,C,T)=\{1\}\) and \(\operatorname{SFPSCEP}(g,D,V)=\{(-\frac{1}{2},\frac{1}{2})\}\). Since \(A(1)=(-\frac {1}{2},\frac{1}{2})\), the solution set of this problem is the singleton \(S=\{p\}\), where \(p=1\).
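These properties of A are easy to verify numerically. The sketch below (all names are ours) checks the adjoint identity \(\langle Ax,u\rangle=\langle x,A^{*}u\rangle\) on random data and evaluates \(\|A(1)\|=\sqrt{\frac{1}{4}+\frac{1}{4}}=\frac{1}{\sqrt{2}}\):

```python
import math
import random

def A(x):
    """The operator A : R -> R^2 of Example 4.1."""
    return (-x / 2.0, x / 2.0)

def A_star(u):
    """Its adjoint A* : R^2 -> R, A*(u1, u2) = -u1/2 + u2/2."""
    u1, u2 = u
    return -u1 / 2.0 + u2 / 2.0

# adjoint identity <Ax, u> = <x, A*u> on random data
for _ in range(100):
    x = random.uniform(-10, 10)
    u = (random.uniform(-10, 10), random.uniform(-10, 10))
    ax = A(x)
    assert abs(ax[0] * u[0] + ax[1] * u[1] - x * A_star(u)) < 1e-9

# ||Ax|| = |x| * sqrt(1/4 + 1/4), hence ||A|| = 1/sqrt(2)
assert abs(math.hypot(*A(1.0)) - 1 / math.sqrt(2)) < 1e-12
```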

Initialization for Algorithm 3.1: Take \(\rho_{k}=1\), \(\epsilon_{k}=0\), \(\mu_{k}=\frac{1}{2}\), \(r_{k}=\frac{1}{1000}\), \(\beta_{k}=\frac{\log(k+4)}{8k+16}\) and \(\delta_{k}=\frac {3^{k+1}+100}{100(3^{k+1})}\).

Initialization for Algorithm 3.2: Take \(\rho_{k}=1\), \(\epsilon_{k}=0\), \(\eta_{k}=1\), \(r_{k}=r=\frac{1}{1000}\), \(\beta_{k}=\frac{\log(k+4)}{8k+16}\) and \(\delta_{k}=\frac{3^{k+1}+100}{100(3^{k+1})}\).

Note that this choice of parameters satisfies the initialization requirements of each algorithm. Choose \(x^{0}\in C\). Let \(x^{k}\), \(w^{k}\), \(y^{k}\), \(t^{k}\), x, y be in \(\mathbb{R}\), and \(u^{k}=(u_{1}^{k},u_{2}^{k})\), \(v=(v_{1},v_{2})\) in \(\mathbb{R}^{2}\). For this example, Algorithm 3.1 reduces to the iteration
$$ \textstyle\begin{cases} y^{k}= \textstyle\begin{cases} x^{k}-\beta_{k},&\text{if } x^{k}-\beta_{k}\geq 0, \\ 1,&\text{otherwise}, \end{cases}\displaystyle \\ t^{k}=\delta_{k}x^{k}+(1-\delta_{k}) \frac{y^{k}+1}{2}, \\ u^{k}= (\frac{1}{1000}-\frac{1}{2}t^{k},- \frac{1}{1000}+\frac {1}{2}t^{k} ), \\ x^{k+1}= \textstyle\begin{cases} \frac{3t^{k}-u_{1}^{k}+u_{2}^{k}}{4},&\text{if } 3t^{k}-u_{1}^{k}+u_{2}^{k}\geq4, \\ 1,&\text{otherwise}, \end{cases}\displaystyle \end{cases} $$
(35)
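Iteration (35) is simple enough to transcribe directly. The following Python sketch (mirroring the Matlab experiment, with the parameter choices from the initialization above; the function name is ours) runs it from \(x^{0}=100\):

```python
import math

def iterate_35(x0=100.0, n=200, r=1e-3):
    """Direct transcription of iteration (35), with beta_k and delta_k as in
    the initialization of Algorithm 3.1 (rho_k = 1, eps_k = 0, r_k = 1/1000)."""
    x = x0
    for k in range(n):
        beta = math.log(k + 4) / (8 * k + 16)
        delta = (3 ** (k + 1) + 100) / (100 * 3 ** (k + 1))
        y = x - beta if x - beta >= 0 else 1.0
        t = delta * x + (1 - delta) * (y + 1) / 2.0
        u1, u2 = r - t / 2.0, -r + t / 2.0
        s = 3 * t - u1 + u2
        x = s / 4.0 if s >= 4 else 1.0
    return x

print(iterate_35())  # approaches the solution p = 1
```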
and Algorithm 3.2 reduces to the iteration
$$ \textstyle\begin{cases} T_{0}=C, \\ T_{k}=\{z\in H_{1}:(t^{k-1}+\mu_{k-1}\nabla h_{r}(t^{k-1})-x^{k})(z-x^{k})\leq0\}\quad \text{for } k\geq1, \\ y^{k}=P_{T_{k}}(x^{k}-\beta_{k}), \\ t^{k}=\delta_{k}x^{k}+(1-\delta_{k})\frac{y^{k}+1}{2}, \\ u^{k}= (\frac{1}{1000}-\frac{1}{2}t^{k},-\frac{1}{1000}+\frac {1}{2}t^{k} ), \\ \mu_{k}= \textstyle\begin{cases} 0,& \text{if } \nabla h_{r}(t^{k})=0, \\ \frac{\eta_{k}h_{r}(t^{k})}{\|\nabla h_{r}(t^{k})\|^{2}},& \text{otherwise}, \end{cases}\displaystyle \\ x^{k+1}=P_{C}(t^{k}+\mu_{k} \frac{u_{2}^{k}-u_{1}^{k}-t^{k}}{2}). \end{cases} $$
(36)
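A corresponding sketch of iteration (36) is given below. The exact definition of \(h_{r}\) appears earlier in the paper; here we assume the form \(h_{r}(t)=\frac{1}{2}\|u-At\|^{2}\) with \(u=T_{r}^{g}(At)\), whose gradient reduces for this example to the direction \(\frac{u_{2}^{k}-u_{1}^{k}-t^{k}}{2}\) printed in (36). All helper names are ours:

```python
import math

def iterate_36(x0=100.0, n=200, r=1e-3, eta=1.0):
    """Sketch of iteration (36) under the assumed h_r described above."""
    def u_of(t):                              # u^k as a function of t^k
        return (r - t / 2.0, -r + t / 2.0)

    def grad_h(t):
        u1, u2 = u_of(t)
        return (u2 - u1 - t) / 2.0            # equals -r in this example

    def h(t):                                 # assumed h_r(t) = (1/2)||u - At||^2
        u1, u2 = u_of(t)
        at1, at2 = -t / 2.0, t / 2.0
        return 0.5 * ((u1 - at1) ** 2 + (u2 - at2) ** 2)

    proj_C = lambda z: max(z, 1.0)            # C = {x in R : x >= 1}
    x, t_prev, mu_prev = x0, None, None
    for k in range(n):
        beta = math.log(k + 4) / (8 * k + 16)
        delta = (3 ** (k + 1) + 100) / (100 * 3 ** (k + 1))
        z = x - beta
        if k == 0:                            # T_0 = C
            y = proj_C(z)
        else:                                 # T_k = {z : a * (z - x^k) <= 0}
            a = t_prev + mu_prev * grad_h(t_prev) - x
            y = z if a * (z - x) <= 0 else x  # 1-D projection onto the half-space
        t = delta * x + (1 - delta) * (y + 1) / 2.0
        g = grad_h(t)
        mu = 0.0 if g == 0.0 else eta * h(t) / (g * g)
        x = proj_C(t + mu * g)
        t_prev, mu_prev = t, mu
    return x

print(iterate_36())  # approaches the solution p = 1
```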
Using Matlab, we compute the numerical results of iterations (35) and (36) with their respective parameter sequences and the same initial point \(x^{0}=100\in C\).
Let \(\{z^{k}\}\) be a sequence in C and set \(D_{k}^{z^{k}}= D_{k}=\| z^{k}-p\|\). The convergence of the sequences \(\{D_{k}^{y^{k}}\}\), \(\{ D_{k}^{t^{k}}\}\), and \(\{D_{k}^{x^{k}}\}\) to 0 implies that \(\{y^{k}\} \), \(\{t^{k}\}\), and \(\{x^{k}\}\) converge to the solution p of the problem. Hence, from Figures 1 and 2, we see that the sequences \(\{ y^{k}\}\), \(\{t^{k}\}\), and \(\{x^{k}\}\) converge to 1, and from Figure 3, we see that \(\{u_{1}^{k}\}\) converges to \(-\frac{1}{2}\) and \(\{ u_{2}^{k}\}\) converges to \(\frac{1}{2}\), so that \(\{u^{k}\}\) converges to \(A(1)=(-\frac{1}{2},\frac{1}{2})\). Moreover, for the control parameter values and initialization given above, iteration (36) converges to the solution faster than iteration (35).
Figure 1

Convergence of iteration (35).

Figure 2

Convergence of iteration (36).

Figure 3

Convergence of \(\pmb{\{u^{k}\}}\) for iterations (35) and (36).

5 Conclusion

We have proposed two strongly convergent algorithms based on a projected subgradient-proximal method for solving the fixed point set constrained split equilibrium problem \(\operatorname{FPSCSEP}(f,C,T;g,D,V)\) in real Hilbert spaces, in which the bifunction f is pseudomonotone on C with respect to its solution set, the bifunction g is monotone on D, and T and V are nonexpansive mappings. The strong convergence of the iteration sequences generated by the algorithms to a solution of this problem is established. Finally, we have presented an application to solving optimization problems, together with numerical results that analyze and compare the convergence speed of the two algorithms on a particular example.


Acknowledgements

This research was partially supported by Naresuan University.

Authors’ contributions

The authors contributed equally and significantly in writing this article. The authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

References

1. Censor, Y, Elfving, T: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8(2), 221-239 (1994)
2. Censor, Y, Segal, A: The split common fixed point problem for directed operators. J. Convex Anal. 16(2), 587-600 (2009)
3. Censor, Y, Elfving, T, Kopf, N, Bortfeld, T: The multiple-sets split feasibility problem and its applications for inverse problems. Inverse Probl. 21(6), 2071-2084 (2005)
4. Yukawa, M, Slavakis, K, Yamada, I: Multi-domain adaptive filtering by feasibility splitting. In: IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), pp. 3814-3817 (2010)
5. Chang, S-S, Lee, HWJ, Chan, CK: A new method for solving equilibrium problem fixed point problem and variational inequality problem with application to optimization. Nonlinear Anal., Theory Methods Appl. 70(9), 3307-3319 (2009)
6. Flåm, SD, Antipin, AS: Equilibrium programming using proximal-like algorithms. Math. Program. 78(1), 29-41 (1996)
7. Anh, PN, Muu, LD: A hybrid subgradient algorithm for nonexpansive mappings and equilibrium problems. Optim. Lett. 8(2), 727-738 (2014)
8. Tada, A, Takahashi, W: Weak and strong convergence theorems for a nonexpansive mapping and an equilibrium problem. J. Optim. Theory Appl. 133(3), 359-370 (2007)
9. Takahashi, S, Takahashi, W: Viscosity approximation methods for equilibrium problems and fixed point problems in Hilbert spaces. J. Math. Anal. Appl. 331(1), 506-515 (2007)
10. Takahashi, S, Takahashi, W: Strong convergence theorem for a generalized equilibrium problem and a nonexpansive mapping in a Hilbert space. Nonlinear Anal., Theory Methods Appl. 69(3), 1025-1033 (2008)
11. Kazmi, KR, Rizvi, SH: Iterative approximation of a common solution of a split equilibrium problem, a variational inequality problem and a fixed point problem. J. Egypt. Math. Soc. 21(1), 44-51 (2013)
12. Quoc, TD, Dung, ML, Nguyen, VH: Extragradient algorithms extended to equilibrium problems. Optimization 57(6), 749-776 (2008)
13. Dinh, BV, Muu, LD: A projection algorithm for solving pseudomonotone equilibrium problems and its application to a class of bilevel equilibria. Optimization 64(3), 559-575 (2015)
14. Hieu, DV: Two hybrid algorithms for solving split equilibrium problems. Int. J. Comput. Math. 95, 561-583 (2018)
15. Dinh, BV, Son, DX, Anh, TV: Extragradient algorithms for split equilibrium problem and nonexpansive mapping. arXiv preprint (2015). arXiv:1508.04914
16. Dinh, BV, Son, DX, Jiao, L, Kim, DS: Linesearch algorithms for split equilibrium problems and nonexpansive mappings. Fixed Point Theory Appl. 2016, 27 (2016)
17. Combettes, PL, Hirstoaga, SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6(1), 117-136 (2005)

Copyright information

© The Author(s) 2018

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

  1. Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok, Thailand
  2. Research Center for Academic Excellence in Mathematics, Naresuan University, Phitsanulok, Thailand
