1 Introduction

Let Θ be a bifunction from \(K\times K\) into the set of real numbers, R, where K is a nonempty closed convex subset of a real Hilbert space H. The equilibrium problem is to find a point \(x\in K\) such that

$$ \Theta(x,y)\geq0, \quad \forall y\in K. $$
(1.1)

We denote the set of solutions of (1.1) by \(\operatorname{EP}(\Theta)\). The equilibrium problem includes the fixed point problem, the variational inequality problem, the optimization problem, the saddle point problem, the Nash equilibrium problem and so on, as its special cases [1, 2].

The generalized mixed equilibrium problem is to find a point \(x\in K\) such that

$$ \Theta(x,y)+ \langle Ax,y-x\rangle+\varphi(y)-\varphi(x)\geq0,\quad \forall y\in K, $$
(1.2)

where φ is a function on K into R and A is a nonlinear mapping from K to H. The set of solutions of a generalized mixed equilibrium problem is denoted by \(\operatorname{GMEP}(\Theta,A,\varphi)\).

If we consider \(\Theta=0\) and \(\varphi=0\) in (1.2), then we have the classical variational inequality problem which is to find a point \(x\in K\) such that

$$ \langle Ax,y-x\rangle\geq0, \quad \forall y\in K. $$
(1.3)

The solution set of (1.3) is denoted by \(\operatorname{VI}(K, A)\).

To proceed we need to recall some definitions and concepts.

Definition 1.1

Let K be a nonempty closed convex subset of a real Hilbert space H.

  (i) A mapping \(S:K\rightarrow K\) is called nonexpansive if \(\| Sx-Sy\|\leq\|x-y\|\) for all \(x,y\in K\).

  (ii) A mapping \(T:K\rightarrow K\) is called a k-strict pseudo contractive mapping if there exists a constant \(0\leq k<1\) such that

    $$ \Vert Tx-Ty\Vert ^{2}\leq \Vert x-y\Vert ^{2}+ k\bigl\Vert (I-T)x-(I-T)y\bigr\Vert ^{2},\quad \forall x,y\in K, $$
    (1.4)

    where I is the identity mapping on K.

  (iii) A mapping \(A:H\rightarrow H\) is called monotone if for each \(x,y\in H\),

    $$\langle Ax-Ay,x-y\rangle\geq0. $$

  (iv) A mapping \(A:H\rightarrow H\) is called β-inverse strongly monotone if there exists \(\beta> 0\) such that

    $$ \langle Ax-Ay,x-y\rangle\geq\beta\|Ax-Ay\|^{2},\quad \forall x,y\in H. $$

  (v) A mapping \(A:K\rightarrow H\) is L-Lipschitz continuous if there exists a positive real number L such that \(\|Ax-Ay\|\leq L\| x-y\|\) for all \(x,y\in K\). If \(0< L<1\), then the mapping A is a contraction with constant L.

Clearly a nonexpansive mapping is a 0-strict pseudo contractive mapping [3]. Note that in a Hilbert space, (1.4) is equivalent to the following inequality:

$$ \langle Tx-Ty,x-y\rangle\leq \Vert x-y\Vert ^{2}- \frac{1-k}{2}\bigl\Vert (x-y)-(Tx-Ty)\bigr\Vert ^{2}, \quad \forall x,y\in K. $$
(1.5)

We denote \(F(T)=\{x\in K : Tx=x\}\), the set of fixed points of T. It can be shown that, for a k-strict pseudo contractive mapping \(T:K\rightarrow K\), the mapping \(I-T\) is demiclosed, i.e., if \(\{x_{n} \}\) is a sequence in K with \(x_{n}\rightharpoonup q\) and \(x_{n}-Tx_{n}\rightarrow0\), then \(q\in F(T)\) (refer to [4]). The symbols ⇀ and → denote weak and strong convergence, respectively.

A set valued mapping \(Q:H\rightarrow2^{H}\) is called monotone if for all \(x,y\in H\), \(f\in Q(x)\) and \(g\in Q(y)\) imply \(\langle x-y,f-g\rangle\geq0\). A monotone mapping \(Q:H\rightarrow2^{H}\) is maximal if the graph \(G(Q)\) of Q is not properly contained in the graph of any other monotone mapping. It is well known that a monotone mapping Q is maximal if and only if for \((x,f )\in H\times H\), \(\langle x-y,f-g\rangle \geq0\) for every \((y,g)\in G(Q)\) implies \(f\in Q(x)\) [5].

For any \(x\in H\) there exists a unique point in K denoted by \(P_{K}x\) such that \(\|x-P_{K}x\|\leq\|x-y\|\) for all \(y\in K\). It is well known that the operator \(P_{K}:H\rightarrow K\), which is called the metric projection, is a nonexpansive mapping and has the properties that, for each \(x\in H\), \(P_{K}x\in K\) and \(\langle x-P_{K}x,P_{K}x-y\rangle\geq0\), for all \(y\in K\). It is also known that \(\|P_{K}x-P_{K}y\|^{2}\leq\langle x-y, P_{K}x-P_{K}y\rangle\), for all \(x,y\in K\) [6]. In the context of the variational inequality problem, we obtain

$$ q\in \operatorname{VI}(K, A)\quad \mbox{if and only if}\quad q=P_{K}(q-\lambda Aq),\quad \forall\lambda>0. $$
(1.6)
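For intuition, the fixed-point characterization (1.6) can be tested numerically in the simplest setting \(H=R\), \(K=[0,1]\). The sketch below is purely illustrative (the mapping \(Ax=x-0.2\) and all parameter values are our own choices, not part of the paper's framework); this A is 1-inverse strongly monotone and \(\operatorname{VI}(K,A)=\{0.2\}\).

```python
def proj_K(x, lo=0.0, hi=1.0):
    """Metric projection P_K onto the interval K = [lo, hi]."""
    return min(max(x, lo), hi)

def A(x):
    """Toy 1-inverse strongly monotone mapping; VI(K, A) = {0.2}."""
    return x - 0.2

def solve_vi(x=0.9, lam=1.0, iters=100):
    """Iterate x <- P_K(x - lam * A(x)); by (1.6) a fixed point solves VI(K, A)."""
    for _ in range(iters):
        x = proj_K(x - lam * A(x))
    return x

q = solve_vi()
# q is a fixed point of P_K(. - lam * A(.)) for every lam > 0, hence q solves (1.3)
```

The same point q remains fixed for any other step size \(\lambda>0\), matching the "for all λ" part of (1.6).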

Let I be an index set. For each \(i\in I\), let \(\Theta_{i}\) be a real valued bifunction on \(K\times K\), \(A_{i}\) a nonlinear mapping, and \(\varphi_{i}:K\rightarrow R\) a function. The system of generalized mixed equilibrium problems as an extension of problems (1.1), (1.2), and (1.3) is to find a point \(x\in K\) such that

$$ \Theta_{i}(x,y)+ \langle A_{i} x,y-x\rangle+ \varphi_{i}(y)-\varphi_{i}(x)\geq 0, \quad \forall y\in K, \forall i\in I. $$
(1.7)

Note that \(\bigcap_{i\in I}\operatorname{GMEP}(\Theta_{i},A_{i},\varphi_{i})\) is the solution set of (1.7).

A vast range of problems arising in economics, finance, image reconstruction, transportation, networks and so on appear as special cases of problem (1.7); see for example [7-10]. This problem also covers various forms of feasibility problems, so it is reasonable to study the system of generalized mixed equilibrium problems. Many authors have introduced iterative processes for finding the solution set of these problems, or a common solution of one of them together with other problems; see for instance [2, 11-13] and the references therein. In 2010, Peng et al. [14] introduced the following iterative algorithm for finding a common element of the set of fixed points of an infinite family of nonexpansive mappings and the set of solutions of a system of finitely many equilibrium problems:

$$ \left \{ \textstyle\begin{array}{l} z_{1}=z \in H, \\ u_{n}=T_{\beta_{n}}^{F_{m}}T_{\beta_{n}}^{F_{m-1}}\cdots T_{\beta_{n}}^{F_{2}}T_{\beta _{n}}^{F_{1}}z_{n}, \\ v_{n}=P_{K}(I-s_{n}A)u_{n}, \\ z_{n+1}=\alpha_{n}\gamma f(W_{n}z_{n})+(I-\alpha_{n} B)W_{n}P_{K}(I-r_{n}A)v_{n}. \end{array}\displaystyle \right . $$

Under suitable conditions, they presented and proved a strong convergence theorem for finding an element of \(\Omega=\bigcap_{i=1}^{\infty}F(T_{i})\cap \operatorname{VI}(K, A)\cap\bigcap_{k=1}^{m} \operatorname{EP}(\mathrm {F}_{k})\). In 2013, Cai and Bu [11] proposed an iterative method as follows:

$$ \left \{ \textstyle\begin{array}{l} x_{1}=x \in K, \\ z_{n}=T_{r_{M,n}}^{(F_{M},\varphi _{M})}(I-r_{M,n}B_{M})T_{r_{M-1,n}}^{(F_{M-1},\varphi _{M-1})}(I-r_{M-1,n}B_{M-1})\cdots T_{r_{1,n}}^{(F_{1},\varphi _{1})}(I-r_{1}B_{1})x_{n}, \\ u_{n}=P_{K}(I-\lambda_{N,n}A_{N})P_{K}(I-\lambda _{N-1,n}A_{N-1})\cdots P_{K}(I-\lambda_{1,n}A_{1})z_{n}, \\ x_{n+1}=\alpha_{n} f(S_{n}x_{n})+\beta_{n} x_{n}+(I-\beta_{n}-\alpha_{n})W(\tau_{n})S u_{n}. \end{array}\displaystyle \right . $$

They proved that, under appropriate conditions, the sequences \(\{x_{n}\}\), \(\{z_{n}\}\), and \(\{u_{n}\}\) converge strongly to \(z=P_{\Omega}f(z)\), where \(\Omega=F(W)\cap\bigcap_{i=1}^{\infty}F(T_{i})\cap\bigcap_{k=1}^{m} \operatorname{GMEP}(\mathrm{F}_{k},\varphi_{k},B_{k}) \cap\bigcap_{j=1}^{N} \operatorname{VI}(K, A_{j})\) and f is a contractive mapping. Iterative methods for solving systems of equilibrium problems have been studied by many other authors; see for example [7, 14, 15]. Note that, for finding a common fixed point of a finite family of mappings together with the solution sets of other problems, authors have usually used the so-called W-mapping [11, 16, 17]. For example, Thianwan [16] proposed the following method for finding a common element of the set of solutions of an equilibrium problem, the set of common fixed points of a finite family of nonexpansive mappings, and the set of solutions of the variational inequality for an α-inverse strongly monotone mapping in a real Hilbert space:

$$ \left \{ \textstyle\begin{array}{l} \phi(u_{n},y)+ \frac{1}{r_{n}} \langle y-u_{n},u_{n}-x_{n} \rangle\geq 0, \\ w_{n}= \alpha_{n} x_{n}+(1-\alpha_{n}) W_{n} P_{K} (u_{n}-\lambda_{n} Au_{n}), \\ K_{n+1}=\{z\in K_{n} : \| w_{n}-z\|\leq\| x_{n}-z\|\}, \\ x_{n+1}= P_{K_{n+1}} (x_{0}). \end{array}\displaystyle \right . $$

He showed that, under suitable conditions, the above algorithm converges strongly to an element of \(\bigcap_{i=1}^{N} F(T_{i}) \cap \operatorname{EP}(\phi) \cap \operatorname{VI}(K, A)\), where, for each \(i=1,\ldots,N\), \(T_{i}\) is a nonexpansive mapping and A is an α-inverse strongly monotone mapping.

In this paper, we present an iterative algorithm for finding a common solution of a system of finite generalized mixed equilibrium problems, a variational inequality problem for an inverse strongly monotone mapping and common fixed points of a finite family of strictly pseudo contractive mappings. We show that the algorithm strongly converges to a solution of the problem under certain conditions. Our results modify, improve and extend corresponding results of Takahashi and Takahashi [18], Zhang et al. [19], Shehu [20], Thianwan [16], and others. The rest of the paper is organized as follows. Section 2 briefly explains the necessary mathematical background. Section 3 presents the main results. A numerical example is provided in the final section.

2 Preliminaries

It is well known that in a (real) Hilbert space H

$$ \|x + y\|^{2} \leq\|x\|^{2} + 2\langle y, x + y\rangle, $$
(2.1)

for all \(x, y \in H\) [12]. Furthermore, it is easy to see that

$$ \Biggl\Vert \sum_{i=1}^{m} x_{i}\Biggr\Vert ^{2}=\sum_{i,j=1}^{m} \langle x_{i},x_{j}\rangle. $$
(2.2)

Lemma 2.1

([13])

Let \(\{a_{n}\}\), \(\{b_{n}\}\), and \(\{c_{n}\}\) be three nonnegative real sequences satisfying

$$ a_{n+1} \leq(1-t_{n})a_{n}+b_{n}+c_{n} $$

with \(\{t_{n}\}\subset[0, 1]\), \(\sum_{n=1}^{\infty} t_{n} = \infty\), \(b_{n}=o(t_{n})\), and \(\sum_{n=1}^{\infty} c_{n} < \infty\). Then \(\lim_{n\rightarrow\infty}a_{n}=0 \).
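Lemma 2.1 is easy to illustrate numerically. The sketch below is an illustration only, with the hypothetical choices \(t_{n}=1/n\), \(b_{n}=c_{n}=1/n^{2}\) (so \(\sum t_{n}=\infty\), \(b_{n}=o(t_{n})\), and \(\sum c_{n}<\infty\)); the recursion is run with equality, the worst case allowed by the bound.

```python
def lemma21_demo(n_max=20000):
    """Run a_{n+1} = (1 - t_n) a_n + b_n + c_n with the bound taken as equality,
    using t_n = 1/n and b_n = c_n = 1/n**2, which satisfy the lemma's hypotheses."""
    a = 1.0
    for n in range(1, n_max + 1):
        t = 1.0 / n
        b = c = 1.0 / n ** 2
        a = (1 - t) * a + b + c
    return a

a_final = lemma21_demo()
# a_n decays roughly like 2 * ln(n) / n, consistent with a_n -> 0
```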

Lemma 2.2

([21])

Let H be a (real) Hilbert space and \(\{x_{n}\}_{n=1}^{N}\) be a finite sequence in H. Let \(\{a_{n}\} _{n=1}^{N}\) be a sequence of nonnegative real numbers such that \(\sum_{n=1}^{N} a_{n}=1\). Then

$$\Biggl\Vert \sum_{i=1}^{N} a_{i} x_{i}\Biggr\Vert ^{2}\leq\sum _{i=1}^{N} a_{i} \Vert x_{i} \Vert ^{2}. $$
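Both the identity (2.2) and the inequality of Lemma 2.2 can be checked numerically; the following sketch (our illustration, with arbitrary sample vectors and weights) does so for four vectors in \(R^{3}\).

```python
import random

random.seed(0)
xs = [[random.gauss(0, 1) for _ in range(3)] for _ in range(4)]  # x_1, ..., x_4 in R^3
a = [0.1, 0.2, 0.3, 0.4]                                         # weights, sum a_i = 1

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

# Identity (2.2): || sum_i x_i ||^2 = sum_{i,j} <x_i, x_j>
s = [sum(x[k] for x in xs) for k in range(3)]
lhs = dot(s, s)
rhs = sum(dot(xi, xj) for xi in xs for xj in xs)

# Lemma 2.2 (convexity of ||.||^2): || sum_i a_i x_i ||^2 <= sum_i a_i ||x_i||^2
w = [sum(a[i] * xs[i][k] for i in range(4)) for k in range(3)]
lhs2 = dot(w, w)
rhs2 = sum(a[i] * dot(xs[i], xs[i]) for i in range(4))
```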

Lemma 2.3

([22])

Let \(\{x_{n}\}\) and \(\{z_{n}\}\) be bounded sequences in a Banach space and \(\{\beta_{n}\}\) be a sequence of real numbers such that \(0<\liminf_{n\rightarrow\infty}\beta_{n}\leq\limsup_{n\rightarrow\infty}\beta_{n}<1\). Suppose that \(x_{n+1} = (1-\beta_{n})z_{n} + \beta_{n}x_{n}\) for all \(n \geq0\) and \(\limsup_{n\rightarrow\infty}(\|z_{n+1}-z_{n}\|-\|x_{n+1}-x_{n}\|)\leq0\). Then \(\lim_{n\rightarrow\infty}\|z_{n}-x_{n}\|=0 \).

Let us assume that the bifunction Θ satisfies the following conditions:

  (A1) \(\Theta(x,x)=0\), \(\forall x\in K\);

  (A2) Θ is monotone on K, i.e., \(\Theta (x,y)+\Theta(y,x)\leq0\), \(\forall x,y\in K\);

  (A3) for all \(x,y,z\in K\), \(\lim_{t\rightarrow0^{+}}\Theta (tz+(1-t)x ,y)\leq\Theta(x,y)\);

  (A4) for all \(x\in K\), \(y\mapsto\Theta(x,y)\) is convex and lower semicontinuous.

Lemma 2.4

([1])

Let K be a nonempty closed convex subset of Hilbert space H and Θ be a real valued bifunction on \(K\times K\) satisfying (A1)-(A4). Let \(r>0\) and \(x\in H\), then there exists \(z\in K\) such that

$$ \Theta(z,y)+\frac{1}{r} \langle y-z ,z-x\rangle\geq0 ,\quad \forall y\in K. $$

Lemma 2.5

([2])

Suppose all conditions in Lemma  2.4 are satisfied. For any given \(r>0\), define a mapping \(T_{r}:H\rightarrow K\) as

$$T_{r} x=\biggl\{ z\in K : \Theta(z,y)+\frac{1}{r} \langle y-z ,z-x\rangle\geq0, \forall y\in K\biggr\} , $$

for all \(x\in H\). Then the following conditions hold:

  1. \(T_{r}\) is single valued;

  2. \(T_{r}\) is firmly nonexpansive, i.e.,

    $$\|T_{r} x-T_{r} y\|^{2}\leq\langle T_{r} x-T_{r} y,x-y\rangle, \quad \forall x,y\in H; $$

  3. \(F(T_{r})=\operatorname{EP}(\Theta)\);

  4. \(\operatorname{EP}(\Theta)\) is a closed and convex set.
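As a concrete illustration of Lemma 2.5 (our own toy example, not taken from the paper), take \(H=K=R\) and the bifunction \(\Theta(z,y)=y^{2}-z^{2}\), which satisfies (A1)-(A4). A direct computation gives the closed form \(T_{r}x=x/(1+2r)\): with \(z=x/(1+2r)\) we have \(z-x=-2rz\), so \(\Theta(z,y)+\frac{1}{r}(y-z)(z-x)=y^{2}-z^{2}-2z(y-z)=(y-z)^{2}\geq0\) for all y. The sketch below verifies the defining inequality on a grid and the identity \(F(T_{r})=\operatorname{EP}(\Theta)=\{0\}\).

```python
def theta(z, y):
    """Theta(z, y) = y**2 - z**2; it satisfies (A1)-(A4) on K = R."""
    return y ** 2 - z ** 2

def T_r(x, r):
    """Closed-form resolvent: the unique z with
    theta(z, y) + (1/r)(y - z)(z - x) >= 0 for all y in R."""
    return x / (1 + 2 * r)

x, r = 3.0, 0.5
z = T_r(x, r)

# The defining inequality holds on a grid of test points y (it equals (y - z)**2).
ok = all(theta(z, y) + (y - z) * (z - x) / r >= -1e-12
         for y in [k / 10 for k in range(-50, 51)])
```

Firm nonexpansiveness is also immediate here: \(|T_{r}x-T_{r}y|^{2}=(x-y)^{2}/(1+2r)^{2}\leq(x-y)^{2}/(1+2r)=(T_{r}x-T_{r}y)(x-y)\).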

Remark 2.6

For the generalized mixed equilibrium problem (1.2), if the nonlinear operator A is monotone and Lipschitz continuous, φ is convex and lower semicontinuous, and the bifunction Θ satisfies the conditions (A1)-(A4), then it is easy to show that \(G(x,y)=\Theta(x,y)+\langle A x,y-x \rangle+ \varphi (y)-\varphi(x)\) also satisfies (A1)-(A4), and the generalized mixed equilibrium problem (1.2) reduces to the following equilibrium problem:

$$ \mbox{find}\quad x\in K \quad \mbox{such that} \quad G(x,y)\geq0, \quad \forall y\in K. $$

3 Main results

As is well known, strict pseudo contractive mappings have a wider range of applications than nonexpansive mappings, for example in solving inverse problems [23]. In addition, various problems, such as image restoration, reduce to finding a common element of the fixed point sets of a family of nonlinear mappings (see for example [24]). To construct an algorithm which can be used to obtain the fixed point set of a family of strictly pseudo contractive mappings, we need the following proposition.

In the sequel, \(I=\{1,2,\ldots,m\}\) and \(J=\{1,2,\ldots,l\}\) are two index sets.

Proposition 3.1

Let \(T_{j}:K\rightarrow K\), \(j\in J\), be \(k_{j}\)-strict pseudo contractive mappings. Define \(S:K\rightarrow K\) by \(S=\gamma_{0}I+\gamma_{1}T_{1}+\cdots+\gamma_{l}T_{l}\), where \(\gamma_{j}\in(0,1)\) for \(j\in\{0\}\cup J\) and \(\sum_{j=0}^{l} \gamma_{j}=1\). If \(\gamma_{0}\in[k,1)\), where \(k=\max\{ k_{1}, \ldots,k_{l}\}\), then S is a nonexpansive mapping and \(F(S)=\bigcap_{j\in J}F(T_{j})\).

Proof

By the definition of the mapping S, we have

$$\begin{aligned} \|S x-S y\|^{2} =&\gamma^{2}_{0} \|x-y\|^{2}+\biggl\| \sum_{j\in J} \gamma_{j} (T_{j} x-T_{j} y)\biggr\| ^{2} \\ &{} +2\gamma_{0} \sum_{j\in J} \gamma_{j} \langle x-y ,T_{j} x-T_{j} y\rangle. \end{aligned}$$
(3.1)

On the other hand, from (1.4) and (2.2) we have

$$\begin{aligned} \biggl\Vert \sum_{j\in J} \gamma_{j} (T_{j} x-T_{j} y)\biggr\Vert ^{2} =&\sum_{j,i\in J} \gamma _{j} \gamma_{i} \langle T_{j} x-T_{j} y,T_{i} x-T_{i} y\rangle \\ \leq&\sum_{j,i\in J} \gamma_{j} \gamma_{i} \Vert T_{j} x-T_{j} y\Vert \Vert T_{i} x-T_{i} y\Vert \\ \leq&\frac{1}{2}\sum_{j,i\in J} \gamma_{j}\gamma_{i}\bigl(\Vert T_{j} x-T_{j} y\Vert ^{2}+\Vert T_{i} x-T_{i} y\Vert ^{2}\bigr) \\ \leq&\frac{1}{2}\sum_{j,i\in J} \gamma_{j}\gamma_{i} \bigl[ \Vert x-y\Vert ^{2}+k_{j} \bigl\Vert (x-y)-(T_{j} x-T_{j} y) \bigr\Vert ^{2} \\ &{} + \Vert x-y\Vert ^{2}+k_{i} \bigl\Vert (x-y)-(T_{i} x-T_{i} y) \bigr\Vert ^{2}\bigr] \\ =&\sum_{j,i\in J}\gamma_{j} \gamma_{i} \Vert x-y\Vert ^{2} \\ &{}+\sum _{j\in J} \gamma_{j} \sum_{i\in J} \gamma_{i} k_{j} \bigl\Vert (x-y)-(T_{j} x-T_{j} y) \bigr\Vert ^{2}. \end{aligned}$$
(3.2)

Furthermore, (1.5) implies that, for each \(j\in J\),

$$ \langle x-y ,T_{j} x-T_{j} y\rangle\leq \Vert x-y\Vert ^{2}-\frac{1-k_{j} }{2}\bigl\Vert (x-y)-(T_{j} x-T_{j} y) \bigr\Vert ^{2}. $$
(3.3)

By substituting (3.2) and (3.3) in (3.1), we have

$$\begin{aligned} \Vert S x-S y\Vert ^{2} \leq&\biggl( \gamma^{2}_{0}+\sum_{j,i\in J} \gamma_{j}\gamma_{i}+2\sum_{j\in J} \gamma_{0} \gamma_{j}\biggr)\Vert x-y\Vert ^{2} \\ &{} +\sum_{j\in J} \gamma_{j} \sum _{i\in J} \gamma_{i} k_{j} \bigl\Vert (x-y)-(T_{j} x-T_{j} y) \bigr\Vert ^{2} \\ &{} -\sum_{j \in J}\gamma_{0} \gamma_{j} (1-k_{j}) \bigl\Vert (x-y)-(T_{j} x-T_{j} y) \bigr\Vert ^{2} \\ =& \Vert x-y\Vert ^{2}-\sum_{j\in J} \gamma_{j} \biggl[\gamma_{0} (1-k_{j})-\sum _{i\in J} \gamma_{i} k_{j}\biggr] \bigl\Vert (x-y)-(T_{j} x-T_{j} y) \bigr\Vert ^{2} \\ =&\Vert x-y\Vert ^{2}-\sum_{j\in J} \gamma_{j} \bigl[\gamma_{0} (1-k_{j})-(1- \gamma_{0})k_{j}\bigr] \bigl\Vert (x-y)-(T_{j} x-T_{j} y) \bigr\Vert ^{2} \\ =&\Vert x-y\Vert ^{2}-\sum_{j\in J} \gamma_{j} (\gamma_{0}-k_{j})\bigl\Vert (x-y)-(T_{j} x-T_{j} y) \bigr\Vert ^{2} \\ \leq&\Vert x-y\Vert ^{2}-\sum_{j\in J} \gamma_{j} (\gamma_{0}-k)\bigl\Vert (x-y)-(T_{j} x-T_{j} y) \bigr\Vert ^{2} \\ \leq&\Vert x-y\Vert ^{2}. \end{aligned}$$
(3.4)

Then S is a nonexpansive mapping. Now, by the definition of S we obtain \(I-S=\sum_{j\in J} \gamma_{j} (I-T_{j} )\) and clearly \(F(S)=\bigcap_{j\in J}F(T_{j})\). □
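Proposition 3.1 can be illustrated on \(H=R\) (a toy example of ours, not from the paper): \(T_{1}x=-x\) is nonexpansive, hence a 0-strict pseudo contractive mapping, and \(T_{2}x=-2x\) is a \(\frac{1}{3}\)-strict pseudo contractive mapping, since (1.4) holds with equality for \(k=\frac{1}{3}\). Taking \(\gamma_{0}=0.5\geq k\), \(\gamma_{1}=0.3\), \(\gamma_{2}=0.2\) gives \(Sx=-0.2x\), which is nonexpansive with \(F(S)=F(T_{1})\cap F(T_{2})=\{0\}\).

```python
def T1(x):
    """T1 x = -x: nonexpansive, hence 0-strict pseudo contractive, F(T1) = {0}."""
    return -x

def T2(x):
    """T2 x = -2x: (1/3)-strict pseudo contractive on R, F(T2) = {0};
    (1.4) holds with equality: 4|x-y|**2 = |x-y|**2 + (1/3)|3(x-y)|**2."""
    return -2 * x

g0, g1, g2 = 0.5, 0.3, 0.2   # gamma_0 = 0.5 >= k = max{0, 1/3}; weights sum to 1

def S(x):
    """S = gamma_0 I + gamma_1 T1 + gamma_2 T2, as in Proposition 3.1 (= -0.2 x)."""
    return g0 * x + g1 * T1(x) + g2 * T2(x)

pairs = [(1.0, -2.0), (0.3, 0.7), (5.0, 4.0)]
nonexpansive = all(abs(S(x) - S(y)) <= abs(x - y) + 1e-12 for x, y in pairs)
```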

Theorem 3.2

Let \(\Theta_{i}:K\times K\rightarrow R\), \(i\in I\), be bifunctions satisfying (A1)-(A4). Suppose that, for each \(i\in I\), the \(B_{i}\) are \(\theta_{i}\)-inverse strongly monotone mappings, the \(C_{i}\) are monotone and Lipschitz continuous mappings from K into H, and the \(\varphi_{i}\) are convex and lower semicontinuous functions from K into R. Let \(T_{j}:K\rightarrow K\), \(j\in J\), be \(k_{j}\)-strict pseudo contractive mappings and \(A:K\rightarrow H\) be a σ-inverse strongly monotone mapping. Let \(f:K\rightarrow K\) be an ε-contraction mapping and \(\{v_{n}\}\) be a convergent sequence in K with limit point v. Suppose that \(\Omega=\bigcap_{i\in I}\operatorname{GMEP}(\Theta_{i},B_{i},C_{i},\varphi_{i})\cap\bigcap_{j\in J}F(T_{j})\cap \operatorname{VI}(K, A) \) is nonempty. For any initial guess \(x_{1}\in K\), define the sequence \(\{x_{n} \}\) by

$$ \left \{ \textstyle\begin{array}{l} \Theta_{i} (u_{n,i},y)+\langle C_{i} u_{n,i}+ B_{i} x_{n},y-u_{n,i} \rangle+ \varphi_{i}(y)-\varphi_{i}(u_{n,i}) \\ \quad {}+ \frac{1}{r_{n,i}} \langle y-u_{n,i},u_{n,i}-x_{n} \rangle\geq0,\quad \forall y\in K, \forall i\in I, \\ y_{n}=\alpha_{n} v_{n}+(I-\alpha_{n}(I-f))P_{K} (\sum_{i\in I} \delta _{n,i}u_{n,i}-\lambda_{n} A\sum_{i\in I}\delta_{n,i}u_{n,i} ), \\ x_{n+1}= \beta_{n} x_{n}+(1-\beta_{n}) (\gamma_{0} I+\sum_{j\in J}\gamma _{j}T_{j})P_{K} (y_{n}-\lambda_{n} Ay_{n}), \end{array}\displaystyle \right . $$
(3.5)

where for all \(n\in N\), \(\{\lambda_{n}\},\{r_{n,i}\}_{i\in I}\subseteq (0,\infty)\), and \(\{\alpha_{n}\}, \{\beta_{n}\}, \{\delta_{n,i}\}_{i\in I}, \{\gamma_{j}\}_{j\in J}\subseteq(0,1)\) are sequences satisfying the following control conditions:

  1. \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), \(\sum_{n=1}^{\infty}\alpha _{n}=\infty\);

  2. \(0<\liminf_{n\rightarrow\infty}\beta_{n}\leq\limsup_{n\rightarrow \infty}\beta_{n}<1\);

  3. for some \(a,b\in(0,2\sigma)\), \(\lambda_{n}\in[a,b]\) and \(\lim_{n\rightarrow\infty}|\lambda_{n+1}-\lambda_{n}|=0\);

  4. for some \(d>0\), \(0< d\leq\delta_{n,i}\leq1\), \(\sum_{i\in I}\delta _{n,i}=1\) and \(\sum_{n=1}^{\infty}|\delta_{n+1,i}-\delta_{n,i}|<\infty\);

  5. for some \(c>0\), \(k\leq\gamma_{0}\leq c<1\), where \(k=\max_{j\in J}\{k_{j}\}\), and \(\gamma_{0}+\sum_{j\in J}\gamma_{j}=1\);

  6. for some \(\tau_{i},\rho_{i}\in(0,2\theta_{i})\), \(r_{n,i}\in[\tau_{i},\rho _{i}]\) and \(\sum_{n=1}^{\infty}|r_{n+1,i}-r_{n,i}|<\infty\), \(i\in I\).

Then the sequence \(\{x_{n} \}\) converges strongly to \(z\in\Omega\), where \(z=P_{\Omega}( v+f(z))\).
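Before the proof, scheme (3.5) can be sketched in a drastically simplified instance (an informal illustration with hypothetical data of our own, not the paper's numerical example): \(H=R\), \(K=[0,1]\), \(m=l=1\), \(\Theta_{1}=0\), \(\varphi_{1}=0\), \(B_{1}=C_{1}=0\) (so the equilibrium step gives \(u_{n,1}=x_{n}\)), \(Ax=x-0.2\), \(T_{1}x=0.2+0.5(x-0.2)\), \(f(x)=x/2\), and \(v_{n}\equiv v=0.3\). Then \(\Omega=F(T_{1})\cap\operatorname{VI}(K,A)=\{0.2\}\), so the iterates should approach 0.2.

```python
def proj(x, lo=0.0, hi=1.0):
    """Metric projection onto K = [lo, hi]."""
    return min(max(x, lo), hi)

def A(x):       # 1-inverse strongly monotone; VI(K, A) = {0.2}
    return x - 0.2

def T1(x):      # nonexpansive (0-strict), F(T1) = {0.2}
    return 0.2 + 0.5 * (x - 0.2)

def f(x):       # contraction with constant epsilon = 1/2
    return 0.5 * x

def iterate(x=0.9, v=0.3, n_iter=2000):
    """Simplified instance of scheme (3.5) in which u_{n,1} = x_n."""
    g0, g1, lam, beta = 0.5, 0.5, 0.8, 0.5
    for n in range(1, n_iter + 1):
        alpha = 1.0 / (n + 1)                 # alpha_n -> 0, sum alpha_n = inf
        u = x                                 # trivial equilibrium step
        k = proj(u - lam * A(u))              # k_n = P_K(u_n - lambda_n A u_n)
        y = alpha * v + (1 - alpha) * k + alpha * f(k)
        t = proj(y - lam * A(y))              # t_n = P_K(y_n - lambda_n A y_n)
        x = beta * x + (1 - beta) * (g0 * t + g1 * T1(t))
    return x
```

Here \((I-\alpha_{n}(I-f))k_{n}=(1-\alpha_{n})k_{n}+\alpha_{n}f(k_{n})\), which is the form used in the y-update above.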

Proof

For \(x,y\in K\) and \(i\in I\), put \(G_{i}(x,y)=\Theta_{i} (x,y)+\langle C_{i} x,y-x \rangle+ \varphi_{i}(y)-\varphi_{i}(x)\). By Remark 2.6, \(G_{i}\) satisfies the conditions (A1)-(A4) and so the algorithm (3.5) can be rewritten as follows:

$$ \left \{ \textstyle\begin{array}{l} G_{i} (u_{n,i},y)+ \langle B_{i} x_{n},y-u_{n,i} \rangle+ \frac{1}{r_{n,i}} \langle y-u_{n,i},u_{n,i}-x_{n} \rangle\geq0,\quad \forall y\in K, i\in I, \\ y_{n}=\alpha_{n} v_{n}+(I-\alpha_{n}(I-f))P_{K} (\sum_{i\in I} \delta _{n,i}u_{n,i}-\lambda_{n} A\sum_{i\in I}\delta_{n,i}u_{n,i} ), \\ x_{n+1}= \beta_{n} x_{n}+(1-\beta_{n}) (\gamma_{0} I+\sum_{j\in J}\gamma _{j}T_{j})P_{K} (y_{n}-\lambda_{n} Ay_{n}). \end{array}\displaystyle \right . $$
(3.6)

Claim 1

The sequences \(\{x_{n} \}\), \(\{y_{n} \}\), \(\{u_{n} \}\), \(\{t_{n} \}\), and \(\{k_{n} \}\) are bounded where, for each \(n\in N\), \(u_{n}=\sum_{i\in I}\delta_{n,i}u_{n,i}\), \(t_{n}=P_{K} (y_{n}-\lambda_{n} Ay_{n})\), and \(k_{n}=P_{K} (u_{n}-\lambda_{n} Au_{n} )\).

To prove the claim from (3.6) we have

$$ G_{i} (u_{n,i},y)+ \frac{1}{r_{n,i}} \bigl\langle y-u_{n,i},u_{n,i}-(I-r_{n,i} B_{i})x_{n} \bigr\rangle \geq0, \quad \forall y\in K, i\in I. $$
(3.7)

Then, by using Lemma 2.5, for each \(i\in I\), we have \(u_{n,i}=T_{r_{n,i}}(x_{n}-r_{n,i} B_{i} x_{n})\), and, for any \(q\in\Omega\), \(q=T_{r_{n,i}}(q-r_{n,i} B_{i} q)\). Thus

$$\begin{aligned} \|u_{n,i}-q\|^{2} &=\bigl\Vert T_{r_{n,i}}(x_{n}-r_{n,i} B_{i} x_{n} )-T_{r_{n,i} } (q-r_{n,i} B_{i} q)\bigr\Vert ^{2} \\ &\leq\bigl\Vert (x_{n}-r_{n,i} B_{i} x_{n})-(q-r_{n,i} B_{i} q)\bigr\Vert ^{2} \\ &\leq\|x_{n}-q\|^{2}+r_{n,i}^{2} \|B_{i} x_{n}-B_{i} q\|^{2}-2r_{n,i} \langle x_{n}-q ,B_{i} x_{n}-B_{i} q \rangle \\ &\leq\|x_{n}-q\|^{2}+r_{n,i}^{2} \|B_{i} x_{n}-B_{i} q\|^{2}-2{r_{n,i}} \theta_{i}\| B_{i} x_{n}-B_{i} q \|^{2} \\ &=\|x_{n}-q\|^{2}+r_{n,i} (r_{n,i}-2 \theta_{i})\|B_{i} x_{n}-B_{i} q \|^{2} \\ &\leq\|x_{n}-q\|^{2}. \end{aligned}$$
(3.8)

So, we have

$$ \|u_{n}-q\|^{2}\leq\sum _{i\in I} \delta_{n,i}\|u_{n,i}-q \|^{2}\leq\sum_{i\in I}\delta_{n,i} \|x_{n}-q\|^{2}=\|x_{n}-q\|^{2}. $$
(3.9)

By the definition of \(t_{n}\) and \(k_{n}\) we have

$$\begin{aligned} \|t_{n}-q\|&\leq\bigl\Vert (y_{n}- \lambda_{n} Ay_{n} )-(q-\lambda_{n} Aq )\bigr\Vert \\ &\leq\|y_{n}-q\| \end{aligned}$$
(3.10)

and

$$\begin{aligned} \|k_{n}-q\|&\leq\bigl\Vert (u_{n}- \lambda_{n} Au_{n} )-(q-\lambda_{n} Aq )\bigr\Vert \\ &\leq\|u_{n}-q\|. \end{aligned}$$
(3.11)

Since \(\lim_{n\rightarrow\infty}v_{n}=v\), the sequence \(\{v_{n}\}\) is bounded, and hence

$$\begin{aligned} \Vert y_{n}-q\Vert &\leq\alpha_{n} \Vert v_{n}-q\Vert +\alpha_{n} \bigl\Vert f(k_{n})-q\bigr\Vert +(1-\alpha_{n})\Vert k_{n}-q\Vert \\ &\leq\alpha_{n} M_{1}+\alpha_{n} \varepsilon \Vert k_{n}-q\Vert +\alpha_{n} \bigl\Vert f(q)-q\bigr\Vert +(1-\alpha_{n})\Vert x_{n}-q\Vert \\ &=\bigl[1-\alpha_{n}(1-\varepsilon)\bigr]\Vert x_{n}-q \Vert +\alpha_{n}\bigl( M_{1}+\bigl\Vert f(q)-q\bigr\Vert \bigr) \\ &\leq\max\biggl\{ \Vert x_{n}-q\Vert , \frac{1}{1-\varepsilon}\bigl( M_{1}+\bigl\Vert f(q)-q\bigr\Vert \bigr)\biggr\} , \end{aligned}$$
(3.12)

where \(M_{1}=\sup_{n\geq1}\{\|v_{n}-q\|\}\). Putting \(S=\gamma_{0} I+\sum_{j\in J}\gamma_{j}T_{j}\), by Proposition 3.1, S is nonexpansive. On the other hand, for all \(n\in N\), we have

$$\begin{aligned} \Vert x_{n+1}-q\Vert &\leq\beta_{n}\Vert x_{n}-q\Vert +(1-\beta_{n})\Vert St_{n}-q \Vert \\ &\leq\beta_{n}\Vert x_{n}-q\Vert +(1- \beta_{n})\Vert t_{n}-q\Vert \\ &\leq\beta_{n}\Vert x_{n}-q\Vert +(1- \beta_{n})\Vert y_{n}-q\Vert \\ &\leq\max\biggl\{ \Vert x_{n}-q\Vert , \frac{1}{1-\varepsilon}\bigl( M_{1}+\bigl\Vert f(q)-q\bigr\Vert \bigr)\biggr\} . \end{aligned}$$
(3.13)

By induction, we deduce that

$$ \Vert x_{n+1}-q\Vert \leq\max\biggl\{ \Vert x_{1}-q\Vert , \frac{1}{1-\varepsilon}\bigl( M_{1}+\bigl\Vert f(q)-q\bigr\Vert \bigr)\biggr\} ,\quad \forall n\in N. $$
(3.14)

Therefore, \(\{x_{n}\}\) is bounded, and so are \(\{y_{n} \}\), \(\{u_{n} \}\), \(\{ u_{n,i}\}\), \(\{t_{n} \}\), and \(\{k_{n} \}\).

Claim 2

\(\|x_{n+1}-x_{n}\|\rightarrow0\) as \(n\rightarrow\infty\).

Let \(z_{n}=\frac{1}{1-\beta_{n}}x_{n+1}-\frac{\beta_{n}}{1-\beta_{n}}x_{n} \). Hence

$$\begin{aligned} \Vert z_{n+1}-z_{n}\Vert &=\biggl\Vert \frac{1}{1-\beta_{n+1}}(x_{n+2}-\beta _{n+1}x_{n+1})- \frac{1}{1-\beta_{n}}(x_{n+1}-\beta_{n} x_{n})\biggr\Vert \\ &=\Vert St_{n+1}-S t_{n}\Vert \\ &\leq \Vert t_{n+1}-t_{n}\Vert . \end{aligned}$$
(3.15)

Now, by the definition of \(t_{n}\) we have

$$\begin{aligned} \Vert t_{n+1}-t_{n}\Vert &\leq\bigl\Vert (y_{n+1}-\lambda_{n+1} Ay_{n+1})-(y_{n}- \lambda_{n} Ay_{n})\bigr\Vert \\ &\leq \Vert y_{n+1}-y_{n}\Vert +|\lambda_{n+1}- \lambda_{n}| \Vert Ay_{n}\Vert . \end{aligned}$$
(3.16)

Similarly,

$$ \|k_{n+1}-k_{n}\|\leq\|u_{n+1}-u_{n} \|+|\lambda_{n+1}-\lambda_{n}| \|Au_{n}\|. $$
(3.17)

By (3.17) and the definition of \(y_{n}\) we obtain

$$\begin{aligned} \Vert y_{n+1}-y_{n}\Vert \leq& \alpha_{n+1} \Vert v_{n+1}-v_{n}\Vert +|\alpha _{n+1}-\alpha_{n}| \Vert v_{n}\Vert +| \alpha_{n+1}-\alpha_{n}| \bigl\Vert f(k_{n})\bigr\Vert \\ &{} +|\alpha_{n+1}-\alpha_{n}| \Vert k_{n} \Vert +\bigl\Vert \bigl(I-\alpha _{n+1}(I-f)\bigr) (k_{n+1})-\bigl(I-\alpha_{n+1}(I-f)\bigr) (k_{n}) \bigr\Vert \\ \leq&\alpha_{n+1} \Vert v_{n+1}-v_{n}\Vert +| \alpha_{n+1}-\alpha_{n}|\bigl( \Vert v_{n}\Vert +\bigl\Vert f(k_{n})\bigr\Vert + \Vert k_{n}\Vert \bigr) \\ &{} +\bigl(1-\alpha_{n+1}(1-\varepsilon)\bigr)\Vert k_{n+1}-k_{n}\Vert \\ \leq&\alpha_{n+1} \Vert v_{n+1}-v_{n}\Vert +| \alpha_{n+1}-\alpha_{n}|\bigl( \Vert v_{n}\Vert +\bigl\Vert f(k_{n})\bigr\Vert + \Vert k_{n}\Vert \bigr) \\ &{} +\bigl(1-\alpha_{n+1}(1-\varepsilon)\bigr) \bigl(\Vert u_{n+1}-u_{n}\Vert +|\lambda _{n+1}- \lambda_{n}| \Vert Au_{n}\Vert \bigr). \end{aligned}$$
(3.18)

Furthermore, by the definition of \(u_{n}\),

$$\begin{aligned} \Vert u_{n+1}-u_{n}\Vert &=\biggl\Vert \sum_{i\in I}(\delta_{n+1,i}u_{n+1,i}- \delta _{n,i}u_{n,i})\biggr\Vert \\ &\leq\biggl\Vert \sum_{{i\in I}}\delta_{n+1,i}(u_{n+1,i}-u_{n,i}) \biggr\Vert +\biggl\Vert \sum_{{i\in I}}( \delta_{n+1,i}-\delta_{n,i})u_{n,i}\biggr\Vert \\ &\leq\sum_{{i\in I}}\delta_{n+1,i}\Vert u_{n+1,i}-u_{n,i}\Vert +\sum_{{i\in I}} |\delta_{n+1,i}-\delta_{n,i}| \Vert u_{n,i}\Vert . \end{aligned}$$
(3.19)

From (3.7), since for each \(i\in I\), \(u_{n,i}, u_{{n+1},i}\in K\),

$$ G_{i}(u_{{n+1},i},u_{n,i})+ \frac{1}{r_{{n+1},i}}\bigl\langle u_{n,i}-u_{{n+1},i},u_{{n+1},i}-(I-r_{{n+1},i}B_{i})x_{n+1} \bigr\rangle \geq0 $$
(3.20)

and

$$ G_{i}(u_{n,i},u_{{n+1},i})+ \frac{1}{r_{n,i}}\bigl\langle u_{{n+1},i}-u_{n,i},u_{n,i}-(I-r_{n,i}B_{i})x_{n} \bigr\rangle \geq0. $$
(3.21)

By adding the two inequalities (3.20), (3.21), and the monotonicity of \(G_{i}\) we have

$$ \biggl\langle u_{{n+1},i}-u_{n,i},\frac {u_{n,i}-(I-r_{n,i}B_{i})x_{n}}{r_{n,i}}- \frac {u_{{n+1},i}-(I-r_{{n+1},i}B_{i})x_{n+1}}{r_{{n+1},i}} \biggr\rangle \geq0, \quad \forall i\in I. $$

So

$$ \biggl\langle u_{{n+1},i}-u_{n,i},u_{n,i}-(I-r_{n,i}B_{i})x_{n}-r_{n,i}B_{i}x_{n+1}- \frac {r_{n,i}}{r_{{n+1},i}}(u_{{n+1},i}-x_{n+1}) \biggr\rangle \geq0, \quad \forall i\in I. $$

Thus, for each \(i\in I\),

$$ \biggl\langle u_{{n+1},i}-u_{n,i},(I-r_{n,i}B_{i})x_{n+1}-(I-r_{n,i}B_{i})x_{n}+(u_{n,i}-u_{{n+1},i})+ \biggl(1- \frac{r_{n,i}}{r_{{n+1},i}}\biggr) (u_{{n+1},i}-x_{n+1}) \biggr\rangle \geq0. $$

This yields

$$\begin{aligned} \Vert u_{{n+1},i}-u_{n,i}\Vert ^{2} \leq&\biggl\langle u_{{n+1},i}-u_{n,i},(I-r_{n,i}B_{i})x_{n+1}-(I-r_{n,i}B_{i})x_{n} \\ &{} +\biggl(1- \frac{r_{n,i}}{r_{{n+1},i}}\biggr) (u_{{n+1},i}-x_{n+1}) \biggr\rangle \\ \leq&\Vert u_{{n+1},i}-u_{n,i}\Vert \biggl[ \bigl\Vert (I-r_{n,i}B_{i})x_{n+1}-(I-r_{n,i}B_{i})x_{n} \bigr\Vert \\ &{} +\biggl|1- \frac{r_{n,i}}{r_{{n+1},i}}\biggr| \Vert u_{{n+1},i}-x_{n+1}\Vert \biggr] \\ \leq&\Vert u_{{n+1},i}-u_{n,i}\Vert \biggl[\Vert x_{n+1}-x_{n}\Vert +\biggl|1- \frac {r_{n,i}}{r_{{n+1},i}}\biggr| \Vert u_{{n+1},i}-x_{n+1}\Vert \biggr],\quad \forall i\in I, \end{aligned}$$

or

$$\begin{aligned} \Vert u_{{n+1},i}-u_{n,i}\Vert &\leq \Vert x_{n+1}-x_{n}\Vert +\frac {1}{r_{{n+1},i}}|r_{{n+1},i}-r_{n,i}| \Vert u_{{n+1},i}-x_{n+1}\Vert \\ &\leq \Vert x_{n+1}-x_{n}\Vert +\frac{1}{\tau}|r_{{n+1},i}-r_{n,i}|M_{2}, \quad \forall i\in I, \end{aligned}$$
(3.22)

where \(\tau=\min_{i\in I}\tau_{i}\) (so that \(r_{n,i}\geq\tau\) for all n and i) and \(M_{2}=\sup_{n\geq1, i\in I}\{\| u_{n,i}-x_{n}\|\}\). Thus, from (3.15), (3.16), (3.18), (3.19), and (3.22) we obtain

$$\begin{aligned} \Vert z_{n+1}-z_{n}\Vert \leq&\Vert x_{n+1}-x_{n}\Vert +\alpha_{n+1} \Vert v_{n+1}-v_{n}\Vert \\ &{}+|\alpha_{n+1}- \alpha_{n}|\bigl(\Vert v_{n}\Vert +\Vert k_{n}\Vert +\bigl\Vert f(k_{n})\bigr\Vert \bigr) \\ &{} +|\lambda_{n+1}-\lambda_{n}| \Vert Ay_{n} \Vert +\bigl(1-\alpha _{n+1}(1-\varepsilon)\bigr) \biggl[ \sum _{i\in I}\delta_{n+1,i}\frac{1}{\tau }|r_{{n+1},i}-r_{n,i}|M_{2} \\ &{} +|\lambda_{n+1}-\lambda_{n}| \Vert Au_{n} \Vert +\sum_{i\in I} |\delta _{n+1,i}- \delta_{n,i}| \Vert u_{n,i}\Vert \biggr]. \end{aligned}$$

So, by assumptions 1-6 of the theorem

$$ \limsup_{n\rightarrow\infty}\bigl\{ \Vert z_{n+1}-z_{n} \Vert -\Vert x_{n+1}-x_{n}\Vert \bigr\} \leq0, $$

and by Lemma 2.3, we have

$$ \lim_{n\rightarrow\infty}\|z_{n}-x_{n}\|=0. $$

But, since \(x_{n+1}-x_{n}=(1-\beta_{n})(z_{n}-x_{n})\), we have

$$ \lim_{n\rightarrow\infty}\|x_{n+1}-x_{n} \|=0. $$
(3.23)

Claim 3

\(\lim_{n\rightarrow\infty}\|x_{n}-S x_{n}\|=0\).

Note that

$$\begin{aligned} \|x_{n}-S x_{n}\|&\leq\|x_{n+1}-x_{n} \|+\|x_{n+1}-S t_{n}\|+\|S t_{n}-S x_{n} \| \\ & \leq\|x_{n+1}-x_{n}\|+\|x_{n+1}-S t_{n}\|+\| t_{n}- x_{n}\|. \end{aligned}$$
(3.24)

First we show that \(\lim_{n\rightarrow\infty}\|x_{n+1}-S t_{n}\|=0\). From (3.5)

$$\begin{aligned} \|x_{n+1}-S t_{n}\|&\leq\beta_{n} \|x_{n}-S t_{n}\| \\ &\leq\beta_{n}\|x_{n}-x_{n+1}\|+ \beta_{n}\|x_{n+1}-S t_{n}\|. \end{aligned}$$

Hence

$$ \|x_{n+1}-S t_{n}\| \leq\frac{\beta_{n}}{1-\beta_{n}} \|x_{n}-x_{n+1}\|. $$

This implies that

$$ \lim_{n\rightarrow\infty}\|x_{n+1}-S t_{n} \|=0. $$
(3.25)

Now, we prove that \(\lim_{n\rightarrow\infty}\|t_{n}-x_{n}\|=0\). To do this, it suffices to show that \(\lim_{n\rightarrow\infty}\|x_{n}-u_{n}\|=0\) and \(\lim_{n\rightarrow\infty}\|u_{n}-t_{n}\|=0\). By the definition of \(t_{n}\) we have

$$\begin{aligned} \Vert t_{n}-q\Vert ^{2}&\leq\bigl\Vert (y_{n}-\lambda_{n} Ay_{n})-(q- \lambda_{n} Aq)\bigr\Vert ^{2} \\ &\leq \Vert y_{n}-q\Vert ^{2}+\lambda_{n}( \lambda_{n}-2\sigma)\Vert Ay_{n}-Aq\Vert ^{2} \\ &\leq \Vert x_{n}-q\Vert ^{2}+\lambda_{n}( \lambda_{n}-2\sigma)\Vert Ay_{n}-Aq\Vert ^{2}. \end{aligned}$$
(3.26)

So, by (3.26) and the convexity of \(\| \cdot \|^{2}\), we have

$$\begin{aligned} \|x_{n+1}-q\|^{2}&\leq\beta_{n} \|x_{n}-q\|^{2}+(1-\beta_{n})\|St_{n}-q \|^{2} \\ &\leq\beta_{n}\|x_{n}-q\|^{2}+(1- \beta_{n})\|t_{n}-q\|^{2} \\ &\leq\beta_{n}\|x_{n}-q\|^{2}+(1- \beta_{n}) \bigl(\|x_{n}-q\|^{2}+ \lambda_{n}(\lambda _{n}-2\sigma)\|Ay_{n}-Aq \|^{2}\bigr) \\ &=\|x_{n}-q\|^{2}+(1-\beta_{n}) \lambda_{n}(\lambda_{n}-2\sigma)\|Ay_{n}-Aq \|^{2}. \end{aligned}$$

Hence

$$\begin{aligned} (1-\beta_{n})\lambda_{n}(2\sigma-\lambda_{n}) \|Ay_{n}-Aq\|^{2} &\leq\|x_{n}-q\| ^{2}- \|x_{n+1}-q\|^{2} \\ &\leq\|x_{n}-x_{n+1}\|\bigl(\Vert x_{n}-q \Vert +\|x_{n+1}-q\|\bigr), \end{aligned}$$

and then

$$ \lim_{n\rightarrow\infty}\|Ay_{n}-Aq\|=0. $$
(3.27)

Using the projection properties gives us

$$\begin{aligned} \Vert t_{n}-q\Vert ^{2} =&\bigl\Vert P_{K}(y_{n}-\lambda_{n} Ay_{n})-P_{K}(q- \lambda_{n} Aq)\bigr\Vert ^{2} \\ \leq&\bigl\langle (y_{n}-\lambda_{n}Ay_{n})-(q- \lambda_{n}Aq), t_{n}-q\bigr\rangle \\ =& \frac{1}{2}\bigl[\bigl\Vert (y_{n}-\lambda_{n}Ay_{n})-(q- \lambda_{n}Aq)\bigr\Vert ^{2}+\Vert t_{n}-q \Vert ^{2} \\ &{} - \bigl\Vert (y_{n}-\lambda_{n}Ay_{n})-(q- \lambda_{n}Aq)-(t_{n}-q)\bigr\Vert ^{2}\bigr] \\ \leq&\frac{1}{2}\bigl[\Vert y_{n}-q\Vert ^{2}+ \Vert t_{n}-q\Vert ^{2}-\bigl\Vert y_{n}-t_{n}- \lambda _{n}(Ay_{n}-Aq)\bigr\Vert ^{2}\bigr] \\ \leq&\frac{1}{2}\bigl[\Vert y_{n}-q\Vert ^{2}+ \Vert t_{n}-q\Vert ^{2}-\Vert y_{n}-t_{n} \Vert ^{2}- \lambda_{n} ^{2} \Vert Ay_{n}-Aq\Vert ^{2} \\ &{} + 2\lambda_{n}\langle y_{n}-t_{n},Ay_{n}-Aq \rangle\bigr]. \end{aligned}$$

This implies that

$$\begin{aligned} \Vert t_{n}-q\Vert ^{2} \leq&\Vert y_{n}-q\Vert ^{2}-\Vert y_{n}-t_{n} \Vert ^{2}- \lambda_{n} ^{2} \Vert Ay_{n}-Aq\Vert ^{2} \\ &{} + 2\lambda_{n}\langle y_{n}-t_{n},Ay_{n}-Aq \rangle \\ \leq&\Vert y_{n}-q\Vert ^{2}-\Vert y_{n}-t_{n}\Vert ^{2}+ 2\lambda_{n} \langle y_{n}-t_{n},Ay_{n}-Aq\rangle. \end{aligned}$$
(3.28)

From (3.28) and the convexity of \(\| \cdot \|^{2}\), one can see that, for \(q\in\Omega\),

$$\begin{aligned} \Vert x_{n+1}-q\Vert ^{2} \leq&\beta_{n} \Vert x_{n}-q\Vert ^{2}+(1-\beta_{n})\Vert St_{n}-q\Vert ^{2} \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n})\Vert t_{n}-q\Vert ^{2} \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n})\bigl[\Vert y_{n}-q\Vert ^{2}- \Vert y_{n}-t_{n}\Vert ^{2} \\ &{} + 2\lambda_{n}\langle y_{n}-t_{n},Ay_{n}-Aq \rangle\bigr] \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n})\bigl[\alpha_{n} \Vert v_{n}-q\Vert ^{2}+\alpha_{n} \bigl\Vert f(k_{n})-q\bigr\Vert ^{2} \\ &{} +(1-\alpha_{n})\Vert k_{n}-q\Vert ^{2}- \Vert y_{n}-t_{n}\Vert ^{2}+ 2 \lambda_{n}\langle y_{n}-t_{n},Ay_{n}-Aq \rangle\bigr] \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n})\bigl[\alpha_{n} \Vert v_{n}-q\Vert ^{2}+\alpha_{n} \bigl\Vert f(k_{n})-q\bigr\Vert ^{2} \\ &{} +(1-\alpha_{n})\Vert x_{n}-q\Vert ^{2}- \Vert y_{n}-t_{n}\Vert ^{2}+ 2 \lambda_{n}\langle y_{n}-t_{n},Ay_{n}-Aq \rangle\bigr]. \end{aligned}$$

Hence

$$\begin{aligned} (1-\beta_{n})\Vert y_{n}-t_{n}\Vert ^{2} \leq&(1-\beta_{n})\bigl[\alpha_{n} \Vert v_{n}-q\Vert ^{2}+\alpha _{n} \bigl\Vert f(k_{n})-q\bigr\Vert ^{2} \\ &{} + 2\lambda_{n}\langle y_{n}-t_{n},Ay_{n}-Aq \rangle\bigr] \\ &{} +\Vert x_{n}-q\Vert ^{2}-\Vert x_{n+1}-q\Vert ^{2} \\ \leq&(1-\beta_{n})\bigl[\alpha_{n} \Vert v_{n}-q\Vert ^{2}+\alpha_{n} \bigl\Vert f(k_{n})-q\bigr\Vert ^{2} \\ &{} + 2\lambda_{n} \Vert y_{n}-t_{n}\Vert \Vert Ay_{n}-Aq\Vert \bigr] \\ &{} +\Vert x_{n}-x_{n+1}\Vert \bigl(\Vert x_{n}-q\Vert +\Vert x_{n+1}-q\Vert \bigr), \end{aligned}$$

and so by (3.27)

$$ \lim_{n\rightarrow\infty}\|y_{n}-t_{n} \|=0. $$
(3.29)

Next, we show that \(\lim_{n\rightarrow\infty}\|y_{n}-u_{n}\|=0\). The definition of \(k_{n}\) and an argument similar to that of (3.26) give us

$$ \|k_{n}-q\|^{2}\leq\|x_{n}-q \|^{2}+\lambda_{n}(\lambda_{n}-2\sigma) \|Au_{n}-Aq\|^{2}. $$
(3.30)

Then

$$\begin{aligned} \Vert x_{n+1}-q\Vert ^{2} \leq&\beta_{n} \Vert x_{n}-q\Vert ^{2}+(1-\beta_{n})\Vert St_{n}-q\Vert ^{2} \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n})\Vert t_{n}-q\Vert ^{2} \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n})\Vert y_{n}-q\Vert ^{2} \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n}) \bigl(\alpha_{n} \Vert v_{n}-q \Vert ^{2} \\ &{} +\alpha_{n} \bigl\Vert f(k_{n})-q\bigr\Vert ^{2}+(1-\alpha_{n})\Vert k_{n}-q\Vert ^{2}\bigr) \\ \leq&\Vert x_{n}-q\Vert ^{2}+(1-\beta_{n}) \bigl(\alpha_{n} \Vert v_{n}-q\Vert ^{2}+ \alpha_{n} \bigl\Vert f(k_{n})-q\bigr\Vert ^{2}\bigr) \\ &{} +(1-\beta_{n}) (1-\alpha_{n})\lambda_{n}( \lambda_{n}-2\sigma)\Vert Au_{n}-Aq\Vert ^{2}. \end{aligned}$$

Hence

$$\begin{aligned} (1-\beta_{n}) (1-\alpha_{n})\lambda_{n}(2 \sigma-\lambda_{n})\Vert Au_{n}-Aq\Vert ^{2}& \leq \Vert x_{n}-q\Vert ^{2}-\Vert x_{n+1}-q \Vert ^{2} \\ &\leq \Vert x_{n}-x_{n+1}\Vert \bigl(\Vert x_{n}-q\Vert +\Vert x_{n+1}-q\Vert \bigr), \end{aligned}$$

and therefore

$$ \lim_{n\rightarrow\infty}\|Au_{n}-Aq\|=0. $$
(3.31)

As in (3.28), we can see that

$$\begin{aligned} \|k_{n}-q\|^{2} \leq\|u_{n}-q \|^{2}-\|u_{n}-k_{n}\|^{2}+ 2 \lambda_{n}\langle u_{n}-k_{n},Au_{n}-Aq \rangle \\ \leq\|x_{n}-q\|^{2}-\|u_{n}-k_{n} \|^{2}+ 2\lambda_{n}\langle u_{n}-k_{n},Au_{n}-Aq \rangle. \end{aligned}$$
(3.32)

From (3.32) and the convexity of \(\| \cdot \|^{2}\), we have

$$\begin{aligned} \Vert x_{n+1}-q\Vert ^{2} \leq&\beta_{n} \Vert x_{n}-q\Vert ^{2}+(1-\beta_{n})\Vert St_{n}-q\Vert ^{2} \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n})\Vert t_{n}-q\Vert ^{2} \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n})\Vert y_{n}-q\Vert ^{2} \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n})\bigl[\alpha_{n} \Vert v_{n}-q\Vert ^{2} \\ &{} +\alpha_{n} \bigl\Vert f(k_{n})-q\bigr\Vert ^{2}+(1-\alpha_{n})\Vert k_{n}-q\Vert ^{2}\bigr] \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n})\bigl[\alpha_{n} \Vert v_{n}-q\Vert ^{2}+\alpha_{n} \bigl\Vert f(k_{n})-q\bigr\Vert ^{2} \\ &{} +(1-\alpha_{n}) \bigl(\Vert x_{n}-q\Vert ^{2}-\Vert u_{n}-k_{n}\Vert ^{2}+ 2 \lambda_{n}\langle u_{n}-k_{n},Au_{n}-Aq \rangle\bigr)\bigr] \\ \leq&\Vert x_{n}-q\Vert ^{2}+ (1-\beta_{n}) \bigl[\alpha_{n} \Vert v_{n}-q\Vert ^{2}+ \alpha_{n} \bigl\Vert f(k_{n})-q\bigr\Vert ^{2} \\ &{} +(1-\alpha_{n}) 2\lambda_{n}\langle u_{n}-k_{n},Au_{n}-Aq\rangle\bigr]-(1-\beta _{n}) (1-\alpha_{n})\Vert u_{n}-k_{n} \Vert ^{2}. \end{aligned}$$

So

$$\begin{aligned} (1-\beta_{n}) (1-\alpha_{n})\Vert u_{n}-k_{n} \Vert ^{2} \leq&(1-\beta_{n})\bigl[\alpha_{n} \Vert v_{n}-q\Vert ^{2}+\alpha_{n} \bigl\Vert f(k_{n})-q\bigr\Vert ^{2}\bigr] \\ &{} +(1-\beta_{n}) (1-\alpha_{n}) 2\lambda_{n} \langle u_{n}-k_{n},Au_{n}-Aq\rangle \\ &{} +\Vert x_{n}-q\Vert ^{2}-\Vert x_{n+1}-q\Vert ^{2} \\ \leq&(1-\beta_{n})\bigl[\alpha_{n} \Vert v_{n}-q\Vert ^{2}+\alpha_{n} \bigl\Vert f(k_{n})-q\bigr\Vert ^{2}\bigr] \\ &{} +(1-\beta_{n}) (1-\alpha_{n}) 2\lambda_{n} \Vert u_{n}-k_{n}\Vert \Vert Au_{n}-Aq \Vert \\ &{} +\Vert x_{n}-x_{n+1}\Vert \bigl(\Vert x_{n}-q\Vert +\Vert x_{n+1}-q\Vert \bigr). \end{aligned}$$

Then the above inequality and (3.31) imply that

$$ \lim_{n\rightarrow\infty}\|u_{n}-k_{n} \|=0. $$
(3.33)

But from (3.5),

$$ \Vert y_{n}-u_{n}\Vert \leq\alpha_{n} \Vert v_{n}-u_{n}\Vert +\alpha_{n} \bigl\Vert f(k_{n})-u_{n}\bigr\Vert +(1-\alpha _{n})\Vert k_{n}-u_{n}\Vert . $$

So, from (3.33) we have

$$ \lim_{n\rightarrow\infty}\|y_{n}-u_{n} \|=0. $$
(3.34)

Then by (3.29) and (3.34) we have

$$ \lim_{n\rightarrow\infty}\|u_{n}-t_{n} \|=0. $$
(3.35)

Now, we show that \(\lim_{n\rightarrow\infty}\|x_{n}-u_{n}\|=0\). To do this, note that, for any \(i\in I\),

$$\begin{aligned} \Vert u_{n,i}-q\Vert ^{2} =&\bigl\Vert T_{r_{n,i} } (x_{n}-r_{n,i} B_{i} x_{n} )-T_{r_{n,i} } (q-r_{n,i} B_{i}q)\bigr\Vert ^{2} \\ \leq&\bigl\langle T_{r_{n,i} } (x_{n}-r_{n,i} B_{i} x_{n} )-T_{r_{n,i} } (q-r_{n,i} B_{i}q),(x_{n}-q)-r_{n,i} (B_{i} x_{n}-B_{i}q)\bigr\rangle \\ = &\langle u_{n,i}-q ,x_{n}-q \rangle-r_{n,i} \langle u_{n,i}-q,B_{i} x_{n}-B_{i}q \rangle \\ \leq&\langle u_{n,i}-q ,x_{n}-q \rangle-r_{n,i} \theta_{i} \Vert B_{i}x_{n}-B_{i}q \Vert ^{2} \\ \leq&\langle u_{n,i}-q ,x_{n}-q \rangle. \end{aligned}$$
(3.36)

So, from (3.36) and the definition of \(u_{n}\), we obtain

$$\begin{aligned} \|u_{n}-q\|^{2}&\leq\sum_{i\in I} \delta_{n,i}\|u_{n,i}-q\|^{2} \\ &\leq\sum_{i\in I} \delta_{n,i}\langle u_{n,i}-q ,x_{n}-q \rangle \\ &=\biggl\langle \sum_{i\in I} \delta_{n,i}u_{n,i}-q ,x_{n}-q \biggr\rangle \\ &= \langle u_{n}-q ,x_{n}-q \rangle \\ &\leq\frac{1}{2} \bigl[\|u_{n}-q\|^{2}+ \|x_{n}-q\|^{2}-\|u_{n}-x_{n} \|^{2}\bigr]. \end{aligned}$$

Thus,

$$ \|u_{n}-q\|^{2} \leq\|x_{n}-q \|^{2}-\|u_{n}-x_{n} \|^{2}. $$
(3.37)

Since S is nonexpansive, we have

$$\begin{aligned} \Vert x_{n+1}-q\Vert ^{2} \leq&\beta_{n} \Vert x_{n}-q\Vert ^{2}+(1-\beta_{n})\Vert S t_{n}-q\Vert ^{2} \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n})\Vert t_{n}-q\Vert ^{2} \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n})\Vert y_{n}-q\Vert ^{2} \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n}) \bigl(\alpha_{n} \Vert v_{n}-q \Vert ^{2}+\alpha_{n} \bigl\Vert f(k_{n})-q \bigr\Vert ^{2}\bigr) \\ &{} +(1-\beta_{n}) (1-\alpha_{n})\Vert k_{n}-q\Vert ^{2} \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n}) \bigl(\alpha_{n} \Vert v_{n}-q \Vert ^{2}+\alpha_{n} \bigl\Vert f(k_{n})-q \bigr\Vert ^{2}\bigr) \\ &{} + (1-\beta_{n}) (1-\alpha_{n})\Vert u_{n}-q\Vert ^{2} \\ \leq&\beta_{n}\Vert x_{n}-q\Vert ^{2}+(1- \beta_{n}) \bigl(\alpha_{n} \Vert v_{n}-q \Vert ^{2}+\alpha_{n} \bigl\Vert f(k_{n})-q \bigr\Vert ^{2}\bigr) \\ &{} + (1-\beta_{n}) (1-\alpha_{n}) \bigl(\Vert x_{n}-q\Vert ^{2}-\Vert x_{n}-u_{n} \Vert ^{2}\bigr). \end{aligned}$$

Hence

$$\begin{aligned} (1-\beta_{n}) (1-\alpha_{n})\Vert x_{n}-u_{n} \Vert ^{2} \leq&(1-\beta_{n}) \bigl(\alpha_{n} \Vert v_{n}-q\Vert ^{2}+\alpha_{n} \bigl\Vert f(k_{n})-q\bigr\Vert ^{2}\bigr) \\ &{} + \Vert x_{n}-q\Vert ^{2}-\Vert x_{n+1}-q\Vert ^{2} \\ \leq&(1-\beta_{n}) \bigl(\alpha_{n} \Vert v_{n}-q\Vert ^{2}+\alpha_{n} \bigl\Vert f(k_{n})-q\bigr\Vert ^{2}\bigr) \\ &{} +\bigl(\Vert x_{n}-q\Vert -\Vert x_{n+1}-q\Vert \bigr) \bigl(\Vert x_{n}-q\Vert +\Vert x_{n+1}-q\Vert \bigr) \\ \leq&(1-\beta_{n}) \bigl(\alpha_{n} \Vert v_{n}-q\Vert ^{2}+\alpha_{n} \bigl\Vert f(k_{n})-q\bigr\Vert ^{2}\bigr) \\ &{} +\bigl(\Vert x_{n}-q\Vert +\Vert x_{n+1}-q\Vert \bigr)\Vert x_{n}-x_{n+1}\Vert , \end{aligned}$$

which yields

$$ \lim_{n\rightarrow\infty}\|x_{n}-u_{n} \|=0. $$
(3.38)

Since \(\|t_{n}-x_{n}\|\leq\|t_{n}-u_{n}\|+\|u_{n}-x_{n}\|\), from (3.35) and (3.38) we obtain

$$ \lim_{n\rightarrow\infty}\|t_{n}-x_{n} \|=0. $$
(3.39)

Inequality (3.24), together with (3.25), (3.39), and \(\| x_{n}-x_{n+1}\|\rightarrow0\), implies that

$$ \lim_{n\rightarrow\infty}\|x_{n}-S x_{n} \|=0. $$
(3.40)

Claim 4

\(\limsup_{n\rightarrow\infty}\langle v+ f(z)-z,y_{n}-z\rangle\leq0\), where \(z=P_{\Omega}( v+ f(z))\).

To prove the claim, let \(\{y_{n_{k}}\}\) be a subsequence of \(\{y_{n}\}\) such that

$$ \limsup_{n\rightarrow\infty}\bigl\langle v+ f(z)-z,y_{n}-z\bigr\rangle =\lim_{k\rightarrow\infty}\bigl\langle v+ f(z)-z,y_{n_{k}}-z\bigr\rangle . $$
(3.41)

By boundedness of \(\{y_{n_{k}}\}\), there exists a subsequence of \(\{ y_{n_{k}}\}\) which is weakly convergent to \(z_{0}\in K\). Without loss of generality, we can assume that \(y_{n_{k}}\rightharpoonup z_{0}\). So, (3.41) reduces to

$$ \limsup_{n\rightarrow\infty}\bigl\langle v+ f(z)-z,y_{n}-z\bigr\rangle =\bigl\langle v+ f(z)-z,z_{0}-z \bigr\rangle . $$
(3.42)

Therefore, by the projection properties, to prove \(\langle v+ f(z)-z,z_{0}-z\rangle\leq0\), it suffices to show that \(z_{0}\in\Omega\).

(a) First we prove that \(z_{0}\in\bigcap_{j\in J}F(T_{j})\). From (3.40) and the demiclosedness property of S we obtain \(z_{0}\in F(S)\). So, by Proposition 3.1, \(z_{0}\in\bigcap_{j\in J}F(T_{j})\).

(b) Next we show that \(z_{0}\in \operatorname{VI}(A,K)\). Note that, from the boundedness of \(\{x_{n}\}\) and \(\{u_{n}\}\) together with (3.34) and (3.38), there exist subsequences \(\{x_{n_{k}}\}\) and \(\{u_{n_{k}}\}\) of \(\{x_{n}\}\) and \(\{ u_{n}\}\), respectively, which converge weakly to \(z_{0}\). Let \(N_{K}x\) denote the normal cone to K at x and let Q be the mapping defined by

$$ Q(x)= \left \{ \textstyle\begin{array}{l@{\quad}l} Ax+N_{K}x, & x\in K, \\ \emptyset,& x \notin K. \end{array}\displaystyle \right . $$
(3.43)

It is well known that Q is a maximal monotone mapping and \(0\in Q(x)\) if and only if \(x\in \operatorname{VI}(A,K)\). For details see [2]. If \((x,u)\in G(Q)\), then \(u-Ax\in N_{K}x\). Since \(k_{n}=P_{K} (u_{n}-\lambda _{n} Au_{n} )\in K\), we have

$$ \langle x-k_{n},u-Ax\rangle\geq0. $$
(3.44)

In addition, from the projection properties we have \(\langle x-k_{n},k_{n}-(u_{n}-\lambda_{n} Au_{n} )\rangle \geq0 \), and hence \(\langle x-k_{n},\frac{k_{n}-u_{n}}{\lambda_{n}}+Au_{n} \rangle\geq0\). Therefore, from (3.44) we have

$$\begin{aligned} \langle x-k_{n_{k}},u\rangle \geq&\langle x-k_{n_{k}},Ax\rangle \\ \geq&\langle x-k_{n_{k}},Ax\rangle-\biggl\langle x-k_{n_{k}}, \frac {k_{n_{k}}-u_{n_{k}}}{\lambda_{n_{k}}}+Au_{n_{k}} \biggr\rangle \\ =& \langle x-k_{n_{k}},Ax-Ak_{n_{k}} \rangle+ \langle x-k_{n_{k}},Ak_{n_{k}}-Au_{n_{k}} \rangle \\ &{} - \biggl\langle x-k_{n_{k}}, \frac{k_{n_{k}}-u_{n_{k}}}{\lambda _{n_{k}}}\biggr\rangle \\ \geq&\langle x-k_{n_{k}},Ak_{n_{k}}-Au_{n_{k}} \rangle- \biggl\langle x-k_{n_{k}},\frac{k_{n_{k}}-u_{n_{k}}}{\lambda_{n_{k}}}\biggr\rangle . \end{aligned}$$
(3.45)

Since A is a continuous mapping, letting \(k\rightarrow\infty\) in (3.45) and using (3.33), we deduce that

$$\langle x-z_{0},u\rangle\geq0. $$

Therefore, from maximal monotonicity of Q, we obtain \(0\in Q(z_{0})\) and hence \(z_{0}\in \operatorname{VI}(A,K)\).

(c) Now we prove that \(z_{0}\in\bigcap_{i\in I}\operatorname{GEP}(G_{i},B_{i})\). For all \(i\in I\), by (3.36),

$$\begin{aligned} \|u_{n,i}-q\|^{2}&\leq\langle u_{n,i}-q ,x_{n}-q \rangle \\ &\leq\frac{1}{2} \bigl[\|u_{n,i}-q\|^{2}+ \|x_{n}-q\|^{2}-\|u_{n,i}-x_{n} \|^{2}\bigr] \end{aligned}$$

and then

$$ \|u_{n,i}-q\|^{2} \leq\|x_{n}-q\|^{2}- \|u_{n,i}-x_{n} \|^{2}. $$

This implies that

$$\begin{aligned} \|u_{n}-q\|^{2} \leq&\sum_{i\in I} \delta_{n,i}\|u_{n,i}-q\|^{2} \\ \leq&\|x_{n}-q \| ^{2}-\sum_{i\in I} \delta_{n,i} \|u_{n,i}-x_{n} \|^{2}. \end{aligned}$$

Therefore, for any \(i\in I\),

$$\begin{aligned} \Vert u_{n,i}-x_{n} \Vert ^{2}&\leq\sum _{i\in I} \delta_{n,i} \Vert u_{n,i}-x_{n} \Vert ^{2} \leq \Vert x_{n}-q\Vert ^{2}-\Vert u_{n}-q\Vert ^{2} \\ &\leq \Vert x_{n}-u_{n}\Vert \bigl(\Vert x_{n}-q\Vert +\Vert u_{n}-q\Vert \bigr). \end{aligned}$$

So by (3.38),

$$ \lim_{n\rightarrow\infty}\|u_{n,i}-x_{n} \|=0, \quad \forall i\in I. $$
(3.46)

Since \(\{u_{n,i}\}_{i\in I}\) is bounded and \(x_{n_{k}}\rightharpoonup z_{0}\), by (3.46) the subsequence \(\{u_{n_{k},i}\}\) of \(\{ u_{n,i}\}\) also converges weakly to \(z_{0}\). Now we show that, for any \(i\in I\), \(z_{0}\) belongs to \(\operatorname{GEP}(G_{i},B_{i})\). Since \(u_{n,i}=T_{r_{n,i}} (x_{n}-r_{n,i} B_{i} x_{n})\), for all \(y\in K\) we have

$$G_{i}(u_{n,i},y)+\langle B_{i}x_{n},y-u_{n,i} \rangle+ \frac{1}{r_{n,i}} \langle y-u_{n,i},u_{n,i}-x_{n} \rangle\geq0, \quad \forall i\in I. $$

From (A2) we obtain

$$\langle B_{i}x_{n},y-u_{n,i} \rangle+ \frac{1}{r_{n,i}} \langle y-u_{n,i},u_{n,i}-x_{n} \rangle\geq G_{i}(y,u_{n,i} ), \quad \forall y\in K, \forall i \in I. $$

Hence, for all \(y\in K\),

$$ \langle B_{i}x_{n_{k}},y-u_{n_{k},i} \rangle+ \biggl\langle y-u_{n_{k},i},\frac {u_{n_{k},i}-x_{n_{k}}}{r_{n_{k},i}} \biggr\rangle \geq G_{i}(y,u_{n_{k},i}),\quad \forall i\in I. $$
(3.47)

Let \(y_{t}=ty+(1-t)z_{0}\), where \(t\in(0,1]\) and \(y\in K\). Then \(y_{t}\in K\) and by (3.47),

$$\begin{aligned} \langle y_{t}-u_{n_{k},i} ,B_{i}y_{t} \rangle \geq&\langle y_{t}-u_{n_{k},i} ,B_{i}y_{t} \rangle- \langle y_{t}-u_{n_{k},i} ,B_{i}x_{n_{k}} \rangle \\ &{} - \biggl\langle y_{t}-u_{n_{k},i} ,\frac{u_{n_{k},i}-x_{n_{k}}}{r_{n_{k},i}} \biggr\rangle + G_{i}(y_{t},u_{n_{k},i} ) \\ =&\langle y_{t}-u_{n_{k},i} ,B_{i}y_{t}-B_{i}u_{n_{k},i} \rangle+ \langle y_{t}-u_{n_{k},i} ,B_{i}u_{n_{k},i}-B_{i}x_{n_{k}} \rangle \\ &{} - \biggl\langle y_{t}-u_{n_{k},i} ,\frac{u_{n_{k},i}-x_{n_{k}}}{r_{n_{k},i}} \biggr\rangle + G_{i}(y_{t},u_{n_{k},i} ),\quad \forall i\in I. \end{aligned}$$

But \(B_{i}\) is a \(\theta_{i}\)-inverse strongly monotone mapping and \(\| u_{n_{k},i}-x_{n_{k}} \|\rightarrow0 \), so \(\|B_{i}u_{n_{k},i}-B_{i}x_{n_{k}} \| \rightarrow0\) and \(\langle y_{t}-u_{n_{k},i} ,B_{i}y_{t}-B_{i}u_{n_{k},i} \rangle \geq0\), for all \(i\in I\). As \(k\rightarrow\infty\), the relations \(\frac {u_{n_{k},i}-x_{n_{k}}}{r_{n_{k},i}} \rightarrow0\), \(u_{n_{k},i}\rightharpoonup z_{0}\), and condition (A4) imply that

$$ \langle y_{t}-z_{0},B_{i}y_{t} \rangle\geq G_{i}(y_{t},z_{0}), \quad \forall i \in I. $$
(3.48)

From (A1), (A4), and (3.48) we have

$$\begin{aligned} 0&=G_{i}(y_{t},y_{t} )\leq tG_{i}(y_{t},y)+(1-t)G_{i}(y_{t},z_{0}) \\ &\leq t G_{i}(y_{t},y)+(1-t)\langle y_{t}-z_{0},B_{i}y_{t} \rangle \\ & =tG_{i}(y_{t},y)+(1-t)t\langle y-z_{0},B_{i}y_{t} \rangle \\ &= t\bigl[G_{i}(y_{t},y)+(1-t)\langle y-z_{0},B_{i}y_{t} \rangle\bigr],\quad \forall i\in I. \end{aligned}$$

Dividing by \(t>0\) and letting \(t\rightarrow0\), we obtain, for each \(y\in K\),

$$G_{i}(z_{0},y)+\langle y-z_{0},B_{i}z_{0} \rangle\geq0, \quad \forall i\in I. $$

That is, \(z_{0}\in \operatorname{GEP}(G_{i},B_{i})\), for all \(i\in I\). Now by parts (a), (b) and (c), \(z_{0}\in\Omega\). Therefore, from (3.42) we obtain

$$ \limsup_{n\rightarrow\infty}\bigl\langle v+ f(z)-z,y_{n}-z\bigr\rangle =\bigl\langle v+ f(z)-z,z_{0}-z \bigr\rangle \leq0. $$
(3.49)

Claim 5

The sequence \(\{x_{n}\}\) converges to z, where \(z=P_{\Omega}( v+ f(z))\).

From the convexity of \(\| \cdot \|^{2}\) and (2.1) we deduce that

$$\begin{aligned} \|x_{n+1}-z\|^{2} \leq&\beta_{n} \|x_{n}-z\|^{2}+(1-\beta_{n})\|St_{n}-z \|^{2} \\ \leq&\beta_{n}\|x_{n}-z\|^{2}+(1- \beta_{n})\|t_{n}-z\|^{2} \\ \leq&\beta_{n}\|x_{n}-z\|^{2}+(1- \beta_{n})\|y_{n}-z\|^{2} \\ \leq&\beta_{n}\|x_{n}-z\|^{2}+(1- \beta_{n})\bigl\| \alpha_{n}\bigl[ v_{n}+ f(k_{n})-z\bigr] \\ &{} +(1-\alpha_{n}) (k_{n}-z)\bigr\| ^{2} \\ \leq&\beta_{n}\|x_{n}-z\|^{2}+(1- \beta_{n}) (1-\alpha_{n})^{2}\|k_{n}-z \|^{2} \\ &{} +2\alpha_{n}(1-\beta_{n})\bigl\langle v_{n}+ f(k_{n})-z,y_{n}-z\bigr\rangle \\ \leq&\beta_{n}\|x_{n}-z\|^{2}+(1- \beta_{n}) (1-\alpha_{n})\|x_{n}-z \|^{2} \end{aligned}$$
(3.50)
$$\begin{aligned} &{} +2\alpha_{n}(1-\beta_{n})\bigl\langle v_{n}+ f(k_{n})-z,y_{n}-z\bigr\rangle \\ =&\bigl(1-\alpha_{n}(1-\beta_{n})\bigr) \|x_{n}-z\|^{2} + \gamma_{n}, \end{aligned}$$
(3.51)

where \(\gamma_{n}=2\alpha_{n}(1-\beta_{n})\langle v_{n}+ f(k_{n})-z,y_{n}-z\rangle \). On the other hand,

$$\begin{aligned} \gamma_{n} =&2\alpha_{n}(1-\beta_{n})\bigl\langle v_{n}+ f(k_{n})-z,y_{n}-z\bigr\rangle \\ =&2\alpha_{n}(1-\beta_{n})\bigl\langle (v_{n}-v)+ \bigl(f(k_{n})-f(z)\bigr),y_{n}-z \bigr\rangle \\ &{} +2\alpha_{n}(1-\beta_{n})\bigl\langle v+ f(z)-z,y_{n}-z\bigr\rangle \\ \leq&2\alpha_{n}(1-\beta_{n})\bigl\{ \Vert v_{n}-v\Vert + \varepsilon \Vert k_{n}-z\Vert \bigr\} \Vert y_{n}-z\Vert \\ &{} +2\alpha_{n}(1-\beta_{n})\bigl\langle v+ f(z)-z,y_{n}-z\bigr\rangle \\ \leq&\alpha_{n}(1-\beta_{n}) \bigl(\Vert v_{n}-v\Vert ^{2}+\Vert y_{n}-z\Vert ^{2}\bigr) \\ &{} +\alpha_{n}(1-\beta_{n}) \varepsilon\bigl(\Vert k_{n}-z\Vert ^{2}+\Vert y_{n}-z\Vert ^{2}\bigr) \\ &{} +2\alpha_{n}(1-\beta_{n})\bigl\langle v+ f(z)-z,y_{n}-z\bigr\rangle . \end{aligned}$$

Suppose that \(M_{0}=\sup_{n\in N}\{\|y_{n}-z\|\}\) and \(M_{1}=\sup_{n\in N}\{\|v_{n}-v\|\}\). So

$$\begin{aligned} \gamma_{n} \leq&\alpha_{n}(1- \beta_{n}) \varepsilon\|x_{n}-z\|^{2}+\alpha _{n}(1-\beta_{n})\bigl[ M_{1}^{2}+ (1+ \varepsilon) M_{0}^{2}\bigr] \\ &{} +2\alpha_{n}(1-\beta_{n})\bigl\langle v+ f(z)-z,y_{n}-z\bigr\rangle . \end{aligned}$$
(3.52)

Substituting (3.52) into (3.51), we obtain

$$\begin{aligned} \|x_{n+1}-z\|^{2} \leq&\bigl(1-\alpha_{n}(1- \beta_{n})\bigr)\|x_{n}-z\|^{2} + \alpha _{n}(1-\beta_{n})\varepsilon\|x_{n}-z \|^{2} \\ &{} +\alpha_{n}(1-\beta_{n})\bigl[ M_{1}^{2}+ (1+\varepsilon) M_{0}^{2}\bigr]+2\alpha _{n}(1-\beta_{n})\bigl\langle v+ f(z)-z,y_{n}-z \bigr\rangle \\ \leq&\bigl[1-\alpha_{n}(1-\beta_{n}) (1-\varepsilon) \bigr]\|x_{n}-z\|^{2}+\alpha_{n}(1-\beta _{n})M \\ &{} +2\alpha_{n}(1-\beta_{n})\bigl\langle v+ f(z)-z,y_{n}-z\bigr\rangle , \end{aligned}$$

where \(M= M_{1}^{2}+(1+\varepsilon) M_{0}^{2}\). Therefore, from (3.49) and Lemma 2.1, we conclude that \(\lim_{n\rightarrow\infty}\|x_{n}-z\|=0 \). Also, from (3.34) and (3.38) we see that \(y_{n}\rightarrow z\) and \(u_{n}\rightarrow z\). This completes the proof.  □

Let \(m=1\) in the index set I and take \(\delta_{n,1}=1\), so (3.5) becomes the following algorithm:

$$ \left \{ \textstyle\begin{array}{l} \Theta(u_{n},y)+\langle C u_{n}+ B x_{n},y-u_{n} \rangle+ \varphi (y)-\varphi(u_{n}) \\ \quad {}+ \frac{1}{r_{n}} \langle y-u_{n},u_{n}-x_{n} \rangle \geq0, \quad \forall y\in K, \\ y_{n}=\alpha_{n} v_{n}+(I-\alpha_{n}(I-f))P_{K} (u_{n}-\lambda_{n} Au_{n} ), \\ x_{n+1}= \beta_{n} x_{n}+(1-\beta_{n}) SP_{K} (y_{n}-\lambda_{n} Ay_{n}). \end{array}\displaystyle \right . $$
(3.53)

Put \(\varphi=0\), \(C=0\), and \(\{v_{n}\}=\{0\}\) in (3.53). If \(A=0\), then by the projection properties, \(k_{n}=P_{K}u_{n}\). Since \(u_{n}\in K\), we have \(k_{n}=u_{n}\). So we get the following corollary, which is the so-called viscosity approximation method.

Corollary 3.3

Let \(\Theta:K\times K\rightarrow R\) be a bifunction satisfying (A1)-(A4) and let B be a θ-inverse strongly monotone mapping. Let \(S:K\rightarrow K\) be a nonexpansive mapping and \(f:K\rightarrow K\) an ε-contraction mapping. Suppose that \(\Omega=\operatorname{GEP}(\Theta,B)\cap F(S)\) is nonempty. For any initial guess \(x_{1}\in K\), define the sequence \(\{x_{n} \}\) by

$$ \left \{ \textstyle\begin{array}{l} \Theta(u_{n},y)+\langle B x_{n},y-u_{n} \rangle+ \frac{1}{r_{n}} \langle y-u_{n},u_{n}-x_{n} \rangle\geq0,\quad \forall y\in K, \\ y_{n}=\alpha_{n} f(x_{n})+(1-\alpha_{n})u_{n}, \\ x_{n+1}= \beta_{n} x_{n}+(1-\beta_{n}) Sy_{n}, \end{array}\displaystyle \right . $$
(3.54)

where \(\{r_{n}\}\) is a positive real sequence, \(\{\alpha_{n}\}\) and \(\{ \beta_{n}\}\) are sequences in \((0,1)\) satisfying the following conditions:

  1. 1.

    \(\lim_{n\rightarrow\infty}\alpha_{n}=0\), \(\sum_{n=1}^{\infty}\alpha _{n}=\infty\);

  2. 2.

    \(0<\liminf_{n\rightarrow\infty}\beta_{n}\leq\limsup_{n\rightarrow \infty}\beta_{n}<1\);

  3. 3.

    for some \(\tau,\rho\in(0,2\theta)\), \(r_{n}\in[\tau,\rho]\) and \(\lim_{n\rightarrow\infty} (r_{n+1}-r_{n})=0\).

Then the sequence \(\{x_{n} \}\) converges strongly to \(z\in\Omega\), where \(z=P_{\Omega} f(z)\).
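Although the corollary is stated in a general Hilbert space, a minimal numerical sketch can illustrate scheme (3.54). Assume the deliberately trivial instance \(\Theta=0\), \(B=0\), \(K=R\), so that \(T_{r_{n}}\) reduces to the identity and \(u_{n}=x_{n}\); the choices \(S(x)=x/3\), \(f(x)=x/2\), \(\alpha_{n}=1/(n+1)\), \(\beta_{n}=1/2\) are illustrative assumptions only, not taken from the paper. Here \(\Omega=F(S)=\{0\}\), so the predicted limit is \(z=0\).

```python
# Minimal sketch of scheme (3.54) under assumed trivial data:
# Theta = 0 and B = 0 on K = R, so T_{r_n} is the identity and u_n = x_n.

def S(x):
    # nonexpansive mapping with F(S) = {0} (illustrative choice)
    return x / 3.0

def f(x):
    # epsilon-contraction with epsilon = 1/2 (illustrative choice)
    return x / 2.0

def viscosity(x1, steps=100):
    x = x1
    for n in range(1, steps + 1):
        alpha = 1.0 / (n + 1)   # alpha_n -> 0 and sum alpha_n = infinity
        beta = 0.5              # 0 < liminf beta_n <= limsup beta_n < 1
        u = x                   # u_n solves the (trivial) equilibrium subproblem
        y = alpha * f(x) + (1 - alpha) * u
        x = beta * x + (1 - beta) * S(y)
    return x

print(abs(viscosity(10.0)) < 1e-10)  # prints True: iterates approach z = 0
```

The same skeleton accommodates a nontrivial resolvent \(T_{r_{n}}\) by replacing the line `u = x` with a solver for the equilibrium subproblem.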

4 Numerical example

In this section, we present a numerical example which supports our algorithm.

Example 1

Suppose \(H=R\) and \(K= [-200,200]\). A system of generalized mixed equilibrium problems is to find a point \(x\in K\) such that, for each \(i \in I\),

$$ \Theta_{i}(x,y)+ \langle A_{i}x,y-x\rangle+ \varphi_{i}(y)-\varphi_{i}(x)\geq0,\quad \forall y\in K. $$
(4.1)

For any \(i\in I\), define \(\varphi_{i}=0\), \(\Theta_{i}(x,y)=(y+ix)(y-x)\), and \(A_{i}x=ix\). It is easy to see that, for each \(i\in I\), \(\Theta _{i}(x,y)\) satisfies conditions (A1)-(A4) and \(A_{i}\) is a \(\frac {1}{i+1}\)-inverse strongly monotone mapping. We know that, for each \(i\in I\), \(T_{r_{i}}\) is single valued. Thus, for any \(y\in K\) and \(r_{i}>0\), we have

$$\begin{aligned} \begin{aligned} &\Theta_{i}(u_{i},y)+ \langle A_{i} x,y-u_{i} \rangle+ \frac{1}{r_{i}}\langle y-u_{i},u_{i}-x \rangle\geq0 \\ &\quad \Longleftrightarrow\quad \Theta_{i}(u_{i},y)+ \frac {1}{r_{i}}\bigl\langle y-u_{i},u_{i}-(I-r_{i}A_{i})x \bigr\rangle \geq0 \\ &\quad \Longleftrightarrow\quad r_{i}y^{2}+\bigl[ \bigl(1+r_{i}(i-1)\bigr)u_{i}-(1-ir_{i})x\bigr]y +\bigl[(1-ir_{i})u_{i}x-(1+ir_{i})u_{i}^{2} \bigr]\geq0. \end{aligned} \end{aligned}$$

Let \(Q_{i}(y)=r_{i}y^{2}+[(1+r_{i}(i-1))u_{i}-(1-ir_{i})x]y+[(1-ir_{i})u_{i}x-(1+i r_{i} )u_{i}^{2}]\). Since \(Q_{i}\) is a quadratic function of y, \(Q_{i}(y)\geq0\) for all \(y\in K \) if and only if the coefficient of \(y^{2} \) is positive and the discriminant \(\Delta_{i} \leq0 \). But

$$\begin{aligned} \Delta_{i} =&\bigl[\bigl(1+r_{i}(i-1)\bigr)u_{i}-(1-ir_{i})x \bigr]^{2} \\ &{}-4r_{i}\bigl[(1-ir_{i})u_{i}x-(1+i r_{i} )u_{i}^{2}\bigr] \\ =&\bigl[ \bigl(1+r_{i}(i+1)\bigr)u_{i}-(1-ir_{i})x \bigr]^{2}, \end{aligned}$$

Since \(\Delta_{i}\) is a perfect square, \(\Delta_{i}\leq0\) forces \(\Delta_{i}=0\), and so we obtain

$$u_{i}=\frac{(1-ir_{i})}{1+r_{i}(i+1)}x $$

and then

$$T_{r_{i}}(x)=\frac{(1-ir_{i})}{1+r_{i}(i+1)}x. $$
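As a quick sanity check of this closed form, one can verify numerically (with arbitrary, purely illustrative sample values of \(i\), \(r_{i}\), and \(x\)) that the quadratic \(Q_{i}\) built from this \(u_{i}\) is nonnegative, as the zero-discriminant argument requires:

```python
# Numerical check of the closed form u_i = (1 - i r_i) x / (1 + r_i (i + 1)):
# with this u_i the discriminant of Q_i vanishes, so Q_i(y) >= 0 for all y.
# The sample values of i, r_i, and x below are arbitrary.

def resolvent(i, r, x):
    return (1 - i * r) * x / (1 + r * (i + 1))

def Q(i, r, x, u, y):
    # Q_i(y) = r y^2 + [(1 + r(i-1))u - (1 - i r)x] y
    #          + [(1 - i r)u x - (1 + i r)u^2]
    return (r * y * y
            + ((1 + r * (i - 1)) * u - (1 - i * r) * x) * y
            + (1 - i * r) * u * x - (1 + i * r) * u * u)

for i in (1, 2):
    for r in (0.25, 0.5):
        for x in (-7.0, 3.0):
            u = resolvent(i, r, x)
            # Q_i attains its minimum value 0 at y = u (zero discriminant)
            assert all(Q(i, r, x, u, y) >= -1e-9
                       for y in [u + 0.1 * k for k in range(-50, 51)])
print("resolvent formula verified")
```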

From Lemma 2.5, we have \(F(T_{r_{i}} )=\operatorname{GEP}(\Theta_{i}, A_{i})=\{0\}\). Define \(S:K \rightarrow K \) by \(S(x)=\sin(x) \). Then S is nonexpansive and \(F(S)=\{0\}\). So, \(\Omega=\{0\}\). Assume that \(I=\{1,2\}\), \(A=0 \), \(\{v_{n}\}=\{0\}\), \(f(x)=\frac{x}{2} \), \(r_{n,i}=\frac {2n}{(n+1)(i+1)} \), \(\alpha_{n}=\frac{1}{n} \), \(\beta_{n}=\frac{1}{3}\), \(\delta_{n,i}=\frac{1}{2}\), and \(C_{i}=0\), \(i\in I\). Hence,

$$ \left \{ \textstyle\begin{array}{l} u_{n,1}=\frac{1}{3n+1} x_{n}, \\ u_{n,2}=\frac{-n+3}{9n+3} x_{n}, \\ y_{n}=\frac{-6n^{3}+37n^{2}-5n-6}{108n^{3}+72n^{2}+12n} x_{n}, \\ x_{n+1}= \frac{1}{3}x_{n}+\frac{2}{3}\sin(\frac {-6n^{3}+37n^{2}-5n-6}{108n^{3}+72n^{2}+12n} x_{n}). \end{array}\displaystyle \right . $$

Then, by Theorem 3.2, the sequence \(\{x_{n} \}\) converges strongly to \(0\in\Omega\). Table 1 and Figure 1 indicate the behavior of \(x_{n}\) for algorithm (3.5) with \(x_{0}=10\) and \(x_{0}=-10\). We have used MATLAB with stopping tolerance \(\varepsilon=10^{-4}\).
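The explicit recursion above is straightforward to reproduce. The following sketch (which assumes the iteration starts at \(n=1\), since the coefficient of \(x_{n}\) in \(y_{n}\) is undefined at \(n=0\)) iterates it and exhibits the convergence to 0 from both initial values:

```python
import math

# Sketch of the concrete recursion from the example: y_n = c_n * x_n with
# c_n = (-6n^3 + 37n^2 - 5n - 6) / (108n^3 + 72n^2 + 12n), and
# x_{n+1} = x_n / 3 + (2/3) sin(c_n * x_n).

def c(n):
    return ((-6 * n**3 + 37 * n**2 - 5 * n - 6)
            / (108 * n**3 + 72 * n**2 + 12 * n))

def iterate(x0, steps=100):
    x = x0
    for n in range(1, steps + 1):
        x = x / 3.0 + (2.0 / 3.0) * math.sin(c(n) * x)
    return x

for x0 in (10.0, -10.0):
    print(x0, "->", iterate(x0))  # both runs approach the solution 0
```

Since \(|c_{n}|\leq 20/192\) here, each step contracts \(|x_{n}|\) by a factor of at most roughly \(1/3+2|c_{n}|/3<1\), which explains the rapid decay visible in Table 1.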

Figure 1

Convergence of the algorithm with initial values \(\pmb{x_{0}=10}\) and \(\pmb{x_{0}=-10}\) .

Table 1 The behavior of \(\pmb{x_{n}}\) with \(\pmb{x_{0}=10}\) and \(\pmb{x_{0}=-10}\)