1 Introduction

Let \(H_{1}\) and \(H_{2}\) be two real Hilbert spaces, whose inner product and norm are denoted by \(\langle \cdot, \cdot \rangle \) and \(\Vert \cdot \Vert \), and let \(C_{1}\) and \(C_{2}\) be two nonempty closed convex subsets of \(H_{1}\) and \(H_{2}\), respectively. Recall that a mapping \(T:C_{1} \to C_{1}\) is nonexpansive if \(\Vert Tx - Ty \Vert \le \Vert x - y \Vert \) for all \(x,y \in C_{1}\). We denote the fixed point set of T by \(\mathrm{Fix}(T) = \{ x \in C_{1}:x = Tx \} \). If T is nonexpansive, then \(\mathrm{Fix}(T)\) is closed and convex (and it is nonempty whenever \(C_{1}\) is also bounded). We now describe the three kinds of problems studied in this paper.

Problem 1

(Hierarchical fixed point problem (HFPP))

In 2006, Moudafi and Mainge [23] introduced and studied the following hierarchical fixed point problem (in short HFPP) for a nonexpansive mapping T with respect to another nonexpansive mapping S on \(C_{1}\): Find \(x \in \mathrm{Fix}(T)\) such that

$$\begin{aligned} \langle x - Sx,y - x \rangle \ge 0,\quad \forall y \in \mathrm{Fix}(T), \end{aligned}$$
(1)

which amounts to saying that \(x \in \mathrm{Fix}(T)\) satisfies a variational inequality determined by the given criterion S; namely, find \(x \in C_{1}\) such that

$$\begin{aligned} 0 \in (I - S)x + N_{\mathrm{Fix}(T)}(x), \end{aligned}$$

where I is the identity mapping on \(C_{1}\) and \(N_{\mathrm{Fix}(T)}\) is the normal cone to \(\mathrm{Fix}(T)\) at x defined by

$$\begin{aligned} N_{\mathrm{Fix}(T)}(x) = \textstyle\begin{cases} \{ u \in H_{1}: \langle y - x,u \rangle \le 0,\forall y \in \mathrm{Fix}(T) \}& \text{if }x \in \mathrm{Fix}(T), \\ \emptyset& \text{otherwise}. \end{cases}\displaystyle \end{aligned}$$

The hierarchical fixed point problem is closely linked with monotone variational inequalities and convex programming problems; see [39] and the references therein. In 2007, Moudafi [22] introduced the following Krasnoselski–Mann algorithm for solving HFPP (1):

$$\begin{aligned} x_{n + 1} = (1 - \alpha _{n})x_{n} + \alpha _{n}\bigl(\sigma _{n}Sx_{n} + (1 - \sigma _{n})Tx_{n}\bigr), \end{aligned}$$

where \(\{ \alpha _{n} \} \) and \(\{ \sigma _{n} \} \) are two real sequences in (0,1).
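
For illustration, the following Python sketch runs this Krasnoselski–Mann iteration on a toy example constructed here purely for illustration (it is not taken from [22]): T and S are chosen as metric projections onto two closed convex subsets of \(R^{2}\), which are nonexpansive, and the parameter sequences are \(\alpha _{n} \equiv 0.5\) and \(\sigma _{n} = 1/(n + 1)\).

```python
import numpy as np

# Toy data (chosen only for this illustration): T = projection onto the closed
# unit ball, S = projection onto a box; both are nonexpansive mappings on R^2.
LO, HI = np.array([0.5, -2.0]), np.array([2.0, 2.0])

def T(x):                      # metric projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def S(x):                      # metric projection onto the box [LO, HI]
    return np.clip(x, LO, HI)

x = np.array([3.0, -1.5])      # arbitrary starting point
for n in range(1, 2001):
    alpha, sigma = 0.5, 1.0 / (n + 1)    # alpha_n, sigma_n in (0, 1), sigma_n -> 0
    x = (1 - alpha) * x + alpha * (sigma * S(x) + (1 - sigma) * T(x))

print("approximate hierarchical fixed point:", np.round(x, 4))
```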

On the other hand, in 2011, Ceng, Ansari, and Yao [8] proposed the following iterative method:

$$\begin{aligned} x_{n + 1} = P_{C}\bigl[\alpha _{n}\rho U(x_{n}) + (I - \alpha _{n}\mu F) \bigl(T(y_{n}) \bigr)\bigr], \end{aligned}$$

where \(\{ y_{n} \} \) is an auxiliary sequence defined in [8], U is a Lipschitzian mapping, and F is a Lipschitzian and strongly monotone mapping. Under appropriate assumptions, they proved that the sequence \(\{ x_{n} \} \) generated by the above iterative algorithm converges strongly to the unique solution of the variational inequality

$$\begin{aligned} \bigl\langle \rho U(x) - \mu F(x),y - x \bigr\rangle \ge 0,\quad \forall y \in \mathrm{Fix}(T). \end{aligned}$$
(2)

Note that HFPP (2) is more general than HFPP (1).

Problem 2

(Split equilibrium problem (SEP))

Let H be a real Hilbert space and C be a nonempty closed convex subset of H. Let F be a bifunction from \(C \times C\) into R, where R denotes the set of real numbers. The equilibrium problem (in short, EP) for \(F:C \times C \to R\) is to find \(x \in C\) such that

$$\begin{aligned} F(x,y) \ge 0,\quad \forall y \in C, \end{aligned}$$
(3)

which was introduced and studied by Blum and Oettli [3]. It contains many problems, such as fixed point problems, variational inequality problems, Nash equilibrium problems, optimization problems, and complementarity problems, as special cases; see, e.g., [1, 2, 20, 31] and the references therein. Combettes and Hirstoaga [15] introduced an iterative scheme for finding the best approximation to the initial data when the solution set of (3) is nonempty and proved a strong convergence theorem. We denote the solution set of EP (3) by \(EP(F) = \{ x \in C:F(x,y) \ge 0,\forall y \in C \} \).

Recently, Kazmi and Rizvi [21] considered the following split equilibrium problem (in short, SEP). Let \(F_{1}:C_{1} \times C_{1} \to R\) and \(F_{2}:C_{2} \times C_{2} \to R\) be two nonlinear bifunctions, and let \(A:H_{1} \to H_{2}\) be a bounded linear operator. The SEP is to find \(x^{*} \in C_{1}\) such that

$$\begin{aligned} F_{1}\bigl(x^{*},x\bigr) \ge 0,\quad \forall x \in C_{1} \end{aligned}$$
(4)

and

$$\begin{aligned} F_{2}\bigl(y^{*},y\bigr) \ge 0,\quad \forall y \in C_{2}, \end{aligned}$$
(5)

where \(y^{*} = Ax^{*} \in C_{2}\). The solution set of SEP (4)–(5) is denoted by \(\Gamma = \{ p \in EP(F_{1}):Ap \in EP(F_{2}) \} \). This formalism is also at the core of modeling many inverse problems arising in phase retrieval and other real-world applications, for example, in sensor networks, in computerized tomography, in intensity-modulated radiation therapy treatment planning, and in data compression; see, e.g., [5, 6, 12–14] and the references therein.

Problem 3

(System of variational inequalities (SVI))

Let \(C_{1}\) be a nonempty closed convex subset of \(H_{1}\), and let \(A,B:C_{1} \to H_{1}\) be two mappings. Ceng, Wang, and Yao [11] considered the following problem: find \((x^{*},y^{*}) \in C_{1} \times C_{1}\) such that

$$\begin{aligned} \textstyle\begin{cases} \langle \lambda _{1}Ay^{*} + x^{*} - y^{*},x - x^{*} \rangle \ge 0,&\forall x \in C_{1}, \\ \langle \lambda _{2}Bx^{*} + y^{*} - x^{*},x - y^{*} \rangle \ge 0,&\forall x \in C_{1}. \end{cases}\displaystyle \end{aligned}$$
(6)

Problem (6) is called a general system of variational inequalities, where \(\lambda _{1} > 0\) and \(\lambda _{2} > 0\) are constants. In 2015, Jitsupa et al. [19] introduced the following system of variational inequalities in a Hilbert space \(H_{1}\): find \(x_{i}^{*} \in C_{1}\) \((i = 1,2, \ldots,N)\) such that

$$\begin{aligned} \textstyle\begin{cases} \langle \lambda _{N}B_{N}x_{N}^{*} + x_{1}^{*} - x_{N}^{*},x - x_{1}^{*} \rangle \ge 0,&\forall x \in C_{1}, \\ \langle \lambda _{N - 1}B_{N - 1}x_{N - 1}^{*} + x_{N}^{*} - x_{N - 1}^{*},x - x_{N}^{*} \rangle \ge 0,&\forall x \in C_{1}, \\ \vdots \\ \langle \lambda _{2}B_{2}x_{2}^{*} + x_{3}^{*} - x_{2}^{*},x - x_{3}^{*} \rangle \ge 0,&\forall x \in C_{1}, \\ \langle \lambda _{1}B_{1}x_{1}^{*} + x_{2}^{*} - x_{1}^{*},x - x_{2}^{*} \rangle \ge 0,&\forall x \in C_{1}, \end{cases}\displaystyle \end{aligned}$$
(7)

which is called a more general system of variational inequalities, where \(\lambda _{i} > 0\) and \(B_{i}:C_{1} \to H_{1}\) is a nonlinear mapping for all \(i \in \{ 1,2, \ldots,N \} \). The solution set of SVI (7) is denoted by \(GSVI(C_{1},B_{i})\).
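
It is known that \((x^{*},y^{*})\) solves system (6) if and only if \(x^{*} = P_{C_{1}}(I - \lambda _{1}A)y^{*}\) and \(y^{*} = P_{C_{1}}(I - \lambda _{2}B)x^{*}\), so that \(x^{*}\) is a fixed point of the composition \(P_{C_{1}}(I - \lambda _{1}A)P_{C_{1}}(I - \lambda _{2}B)\); this is the reason why the set \(\mathrm{Fix}(G)\) appears later. The Python sketch below is a toy instance of (6) constructed here only for illustration: the chosen A and B make the composition a strict contraction, so plain Picard iteration already converges (in general one needs schemes such as the one proposed in Sect. 3).

```python
import numpy as np

# Toy instance of system (6): C1 = closed unit ball in R^2, A(x) = x - a1,
# B(x) = x - b1 (both 1-inverse-strongly monotone), lambda_1 = lambda_2 = 0.5.
# These choices are assumptions made only for this illustration.

def proj_C1(x):                      # metric projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

a1, b1 = np.array([0.6, 0.2]), np.array([-0.2, 0.4])
lam1 = lam2 = 0.5
A = lambda x: x - a1
B = lambda x: x - b1

x = np.zeros(2)
for _ in range(200):
    y = proj_C1(x - lam2 * B(x))     # y* = P_C1(I - lambda_2 B)x*
    x = proj_C1(y - lam1 * A(y))     # x* = P_C1(I - lambda_1 A)y*

print("approximate solution pair of (6):", np.round(x, 4), np.round(y, 4))
```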

In view of these three different kinds of problems, there are several recent results on numerical algorithms in the literature. In the setting of uniformly convex Banach spaces, the Thakur three-step iterative process for Suzuki-type nonexpansive mappings or generalized nonexpansive mappings enriched with property (E) was studied in [27–30], and comparative numerical experiments visualizing the convergence behavior were performed. In [25], an S-iteration technique for finding common fixed points of nonself quasi-nonexpansive mappings was developed, and the convergence properties of the proposed algorithm were analyzed. In [17], a hybrid projection algorithm for a countable family of mappings was considered, and strong convergence of the algorithm to a common fixed point of the mappings was established. Very recently, Dadashi and Postolache [18] constructed a forward–backward splitting algorithm for approximating a zero of the sum of an α-inverse strongly monotone operator and a maximal monotone operator and proved a strong convergence theorem under mild conditions. In particular, by incorporating a nonexpansive mapping into the algorithm, they proved that the generated sequence converges strongly to a common element of the fixed point set of the nonexpansive mapping and the zero point set of the sum of the monotone operators. They also applied their main result to equilibrium problems and to convex programming.

On the other hand, Ceng et al. [9] introduced a hybrid viscosity extragradient method for finding common elements of the solution set of a general system of variational inequalities, the common fixed point set of a countable family of nonexpansive mappings, and the zero point set of an accretive operator in real smooth Banach spaces. Moreover, in [10] they proposed an implicit composite extragradient-like method, based on the Mann iteration, the viscosity approximation method, and the Korpelevich extragradient method, for solving a general system of variational inequalities with a hierarchical variational inequality constraint for countably many uniformly Lipschitzian pseudocontractive mappings and an accretive operator in a real Banach space. In [36, 38], Yao, Postolache, and Yao suggested a projection-type algorithm and an extragradient algorithm for finding, respectively, common solutions of two variational inequalities and a common element of the fixed point set of a pseudocontractive operator and the solution set of a variational inequality problem in Hilbert spaces. In [35, 37], Yao et al. introduced iterative algorithms for solving a split variational inequality and a fixed point problem, which require finding a solution of a generalized variational inequality whose image is a fixed point of a pseudocontractive operator, or a fixed point of two quasi-pseudocontractive operators under a nonlinear transformation, in Hilbert spaces. In [33, 34], Yao et al. constructed iterative algorithms for solving the split feasibility problem and the fixed point problem, and the split equilibrium problems and fixed point problems involving pseudocontractive mappings, in Hilbert spaces, and proved their strong convergence.

Inspired and motivated by the research work mentioned above, we suggest an iterative approximation method for finding an element of the common solution set of HFPP (2), SEP (4)–(5), and SVI (7) involving nonexpansive mappings. To the best of our knowledge, there has been no study on finding an element of the common solution set of HFPP (2), SEP (4)–(5), and SVI (7). By specializing the mappings, we obtain a corollary on finding a common element of the fixed point set of a nonexpansive mapping, the solution set of a variational inequality, and the solution set of an equilibrium problem. Hence the results presented here are new.

The paper is organized as follows. In Sect. 2, we recall some concepts and lemmas which are needed in proving our main results. In Sect. 3, we suggest an iterative algorithm for solving the three different kinds of problems and prove its strong convergence. Finally, a conclusion is given.

2 Preliminaries

In this section, we list some fundamental results that are useful in the consequent analysis.

Let H be a real Hilbert space and C be a nonempty closed convex subset of H. Then, for all \(x,y \in H\), the following relations hold:

$$\begin{aligned} \Vert x - y \Vert ^{2} = \Vert x \Vert ^{2} - \Vert y \Vert ^{2} - 2 \langle x - y,y \rangle,\qquad \Vert x + y \Vert ^{2} \le \Vert x \Vert ^{2} + 2 \langle y,x + y \rangle. \end{aligned}$$

A function \(F:C \times C \to R\) is called an equilibrium function if it satisfies the following conditions:

  1. (A1)

    \(F(x,x) = 0\) for all \(x \in C\);

  2. (A2)

    F is monotone, i.e., \(F(x,y) + F(y,x) \le 0\) for all \(x,y \in C\);

  3. (A3)

    \(\mathop{\lim \sup}_{t \downarrow 0}F(tz + (1 - t)x,y) \le F(x,y)\) for all \(x,y,z \in C\);

  4. (A4)

    for each \(x \in C\), \(y \mapsto F(x,y)\) is convex and lower semi-continuous;

  5. (A5)

    For fixed \(r > 0\) and \(z \in C\), there exist a nonempty compact convex subset K of H and a point \(x \in C \cap K\) such that

    $$\begin{aligned} F(y,x) + \frac{1}{r} \langle y - x,x - z \rangle < 0, \quad\forall y \in C \backslash K. \end{aligned}$$

Lemma 2.1

([16])

Assume that \(F:C \times C \to R\) is an equilibrium function. For \(r > 0\), define a mapping \(R_{r,F}:H \to C\) as follows:

$$\begin{aligned} R_{r,F}(x) = \biggl\{ z \in C:F(z,y) + \frac{1}{r} \langle y - z,z - x \rangle \ge 0,\forall y \in C \biggr\} \end{aligned}$$

for all \(x \in H\). Then the following hold:

  1. (B1)

    \(R_{r,F}\) is single-valued;

  2. (B2)

    \(\mathrm{Fix}(R_{r,F}) = EP(F)\), and \(EP(F)\) is a nonempty closed convex subset of C;

  3. (B3)

    \(R_{r,F}\) is a firmly nonexpansive mapping, i.e.,

    $$\begin{aligned} \bigl\Vert R_{r,F}(x) - R_{r,F}(y) \bigr\Vert ^{2} \le \bigl\langle R_{r,F}(x) - R_{r,F}(y),x - y \bigr\rangle , \quad\forall x,y \in H. \end{aligned}$$
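
To make the resolvent \(R_{r,F}\) concrete, consider the special case \(F(z,y) = \varphi (y) - \varphi (z)\) with φ convex and \(C = H\); the defining inequality then becomes the optimality condition of \(\min_{y} \varphi (y) + \frac{1}{2r} \Vert y - x \Vert ^{2}\), so \(R_{r,F}\) is the proximal mapping of rφ. The Python sketch below (an illustration of our own, not part of [16]) takes \(\varphi = \vert \cdot \vert \) on R, whose proximal mapping is soft-thresholding, and checks (B2) and (B3) numerically.

```python
import numpy as np

def resolvent_abs(x, r):
    """R_{r,F} for F(z, y) = |y| - |z| on C = H = R, i.e. soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

r = 0.7
rng = np.random.default_rng(0)
xs, ys = rng.normal(size=1000), rng.normal(size=1000)

# (B2): EP(F) = argmin |.| = {0}, and 0 is indeed a fixed point of R_{r,F}.
assert resolvent_abs(0.0, r) == 0.0

# (B3): firm nonexpansiveness, |Rx - Ry|^2 <= (Rx - Ry)(x - y), on random pairs.
dR = resolvent_abs(xs, r) - resolvent_abs(ys, r)
assert np.all(dR ** 2 <= dR * (xs - ys) + 1e-12)
print("firm nonexpansiveness verified on", xs.size, "random pairs")
```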

Lemma 2.2

Let \(F:C \times C \to R\) be an equilibrium function, and let \(R_{r,F}\) be defined as in Lemma 2.1 for \(r > 0\). Let \(x,y \in H\) and \(r_{1},r_{2} > 0\). Then

$$\begin{aligned} \bigl\Vert R_{r_{2},F}(y) - R_{r_{1},F}(x) \bigr\Vert \le \Vert y - x \Vert + \biggl\vert \frac{r_{2} - r_{1}}{r_{2}} \biggr\vert \bigl\Vert R_{r_{2},F}(y) - y \bigr\Vert . \end{aligned}$$

Lemma 2.3

([32])

Let \(\{ a_{n} \} \) be a sequence of nonnegative real numbers such that

$$\begin{aligned} a_{n + 1} \le ( 1 - \alpha _{n} )a_{n} + \delta _{n},\quad n \ge 0, \end{aligned}$$

where \(\{ \alpha _{n} \} \) is a sequence in \(( 0,1 )\) and \(\{ \delta _{n} \} \) is a sequence in R such that

$$\begin{aligned} (\mathrm{i})\quad \sum_{n = 1}^{\infty } \alpha _{n} = \infty;\qquad (\mathrm{ii})\quad \mathop{\lim \sup}_{n \to \infty } \frac{\delta _{n}}{\alpha _{n}} \le 0 \quad\textit{or}\quad \sum_{n = 1}^{\infty } \vert \delta _{n} \vert < \infty. \end{aligned}$$

Then \(\lim_{n \to \infty } a_{n} = 0\).

Lemma 2.4

Let \(P_{C}\) denote the metric projection of H onto C. It is known that \(P_{C}\) is nonexpansive and that the following inequalities hold:

$$\begin{aligned} &\Vert P_{C}x - P_{C}y \Vert ^{2} \le \langle x - y,P_{C}x - P_{C}y \rangle,\quad \forall x,y \in H, \\ &\Vert x - y \Vert ^{2} \ge \Vert x - P_{C}x \Vert ^{2} + \Vert y - P_{C}x \Vert ^{2},\quad \forall x \in H,y \in C, \\ &\bigl\Vert (x - y) - (P_{C}x - P_{C}y) \bigr\Vert ^{2} \ge \Vert x - y \Vert ^{2} - \Vert P_{C}x - P_{C}y \Vert ^{2}, \quad\forall x,y \in H. \end{aligned}$$
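
As a quick numerical illustration of the first inequality (firm nonexpansiveness of \(P_{C}\)), the following sketch, with C taken to be the closed unit ball in \(R^{3}\) purely as an assumption for the test, samples random pairs of points and verifies \(\Vert P_{C}x - P_{C}y \Vert ^{2} \le \langle x - y,P_{C}x - P_{C}y \rangle \).

```python
import numpy as np

def proj_C(x):                     # metric projection onto the closed unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

rng = np.random.default_rng(1)
for _ in range(1000):
    x, y = rng.normal(size=3) * 3, rng.normal(size=3) * 3
    px, py = proj_C(x), proj_C(y)
    # firm nonexpansiveness: ||P_C x - P_C y||^2 <= <x - y, P_C x - P_C y>
    assert np.dot(px - py, px - py) <= np.dot(x - y, px - py) + 1e-12
print("firm nonexpansiveness of P_C verified on 1000 random pairs")
```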

Lemma 2.5

If B is an α-inverse-strongly monotone mapping of C into H, and \(\lambda \in [0,2\alpha ]\), then \(I - \lambda B\) is a nonexpansive mapping.

Proof

For any \(w,u \in C\), we have

$$\begin{aligned} \bigl\Vert (I - \lambda B)w - (I - \lambda B)u \bigr\Vert ^{2} &= \bigl\Vert (w - u) - \lambda (Bw - Bu) \bigr\Vert ^{2} \\ &= \Vert w - u \Vert ^{2} - 2\lambda \langle Bw - Bu,w - u \rangle + \lambda ^{2} \Vert Bw - Bu \Vert ^{2} \\ &\le \Vert w - u \Vert ^{2} + \lambda (\lambda - 2\alpha ) \Vert Bw - Bu \Vert ^{2} \\ &\le \Vert w - u \Vert ^{2}, \end{aligned}$$

which implies that \(I - \lambda B\) is nonexpansive, completing the proof. □
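
A standard concrete source of such mappings (stated here only as an illustrative assumption): if \(B = \nabla f\) with \(f(x) = \frac{1}{2} \langle Qx,x \rangle \) and Q symmetric positive semidefinite, then B is α-inverse-strongly monotone with \(\alpha = 1/\lambda _{\max }(Q)\), and Lemma 2.5 predicts that \(I - \lambda B\) is nonexpansive for every \(\lambda \in [0,2\alpha ]\). The sketch below checks this by computing the operator norm of \(I - \lambda Q\).

```python
import numpy as np

rng = np.random.default_rng(2)
M = rng.normal(size=(4, 4))
Q = M.T @ M                                  # symmetric positive semidefinite
alpha = 1.0 / np.linalg.eigvalsh(Q).max()    # B(x) = Qx is alpha-inverse-strongly monotone

for lam in np.linspace(0.0, 2 * alpha, 21):
    # operator norm of I - lam*Q (symmetric), i.e. the Lipschitz constant of I - lam*B
    op_norm = np.abs(np.linalg.eigvalsh(np.eye(4) - lam * Q)).max()
    assert op_norm <= 1.0 + 1e-12            # I - lam*B is nonexpansive
print("I - lambda*B is nonexpansive for every lambda in [0, 2*alpha]")
```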

Lemma 2.6

([7])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let \(B_{i}:C \to H\) be an \(\alpha _{i}\)-inverse-strongly monotone mapping, where \(i \in \{ 1,2, \ldots, N \} \). Let \(G:C \to C\) be a mapping defined by

$$\begin{aligned} G(x) = P_{C}(I - \lambda _{N}B_{N})P_{C}(I - \lambda _{N - 1}B_{N - 1}) \cdots P_{C}(I - \lambda _{2}B_{2})P_{C}(I - \lambda _{1}B_{1})x,\quad \forall x \in C. \end{aligned}$$

If \(\lambda _{i} \in [0,2\alpha _{i}]\), \(i = 1,2, \ldots,N\), then \(G:C \to C\) is nonexpansive.

Proof

Put \(T^{i} = P_{C}(I - \lambda _{i}B_{i})P_{C}(I - \lambda _{i - 1}B_{i - 1}) \cdots P_{C}(I - \lambda _{2}B_{2})P_{C}(I - \lambda _{1}B_{1})\) for \(i = 1,2, \ldots,N\), and \(T^{0} = I\), where I is the identity mapping on C. Then \(G = T^{N}\). For all \(x,y \in C\), we have

$$\begin{aligned} \bigl\Vert G(x) - G(y) \bigr\Vert &= \bigl\Vert T^{N}(x) - T^{N}(y) \bigr\Vert \\ &= \bigl\Vert P_{C}(I - \lambda _{N}B_{N})T^{N - 1}x - P_{C}(I - \lambda _{N}B_{N})T^{N - 1}y \bigr\Vert \\ &\le \bigl\Vert (I - \lambda _{N}B_{N})T^{N - 1}x - (I - \lambda _{N}B_{N})T^{N - 1}y \bigr\Vert \\ &\le \bigl\Vert T^{N - 1}x - T^{N - 1}y \bigr\Vert \\ &\vdots \\ &\le \Vert x - y \Vert . \end{aligned}$$

Then G is nonexpansive, which completes the proof. □

Lemma 2.7

([8])

Let \(U:C \to H\) be a τ-Lipschitzian mapping, and let \(F:C \to H\) be a k-Lipschitzian and η-strongly monotone mapping. Then, for \(0 \le \rho \tau < \mu \eta \), the mapping \(\mu F - \rho U\) is \((\mu \eta - \rho \tau )\)-strongly monotone, i.e.,

$$\begin{aligned} \bigl\langle (\mu F - \rho U)x - (\mu F - \rho U)y,x - y \bigr\rangle \ge (\mu \eta - \rho \tau ) \Vert x - y \Vert ^{2},\quad \forall x,y \in C. \end{aligned}$$

Lemma 2.8

([26])

Suppose that \(\lambda \in (0,1)\) and \(\mu > 0\). Let \(F:C \to H\) be a k-Lipschitzian and η-strongly monotone mapping. In association with a nonexpansive mapping \(T:C \to C\), define the mapping \(T^{\lambda }:C \to H\) by

$$\begin{aligned} T^{\lambda } (x) = T(x) - \lambda \mu FT(x), \quad\forall x \in C. \end{aligned}$$

Then, provided that \(\mu < \frac{2\eta }{k^{2}}\), \(T^{\lambda }\) is a contraction, that is,

$$\begin{aligned} \bigl\Vert T^{\lambda } x - T^{\lambda } y \bigr\Vert \le (1 - \lambda \nu ) \Vert x - y \Vert ,\quad \forall x,y \in C, \end{aligned}$$

where \(\nu = 1 - \sqrt{1 - \mu (2\eta - \mu k^{2})}\).
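
The contraction estimate of Lemma 2.8 can also be checked numerically. In the toy instance below (all choices are ours, made only for the test), T is the projection onto the unit ball and \(F(x) = Qx\) for a symmetric positive definite Q, so that F is k-Lipschitzian and η-strongly monotone with \(k = \lambda _{\max }(Q)\) and \(\eta = \lambda _{\min }(Q)\); the bound \(\Vert T^{\lambda }x - T^{\lambda }y \Vert \le (1 - \lambda \nu ) \Vert x - y \Vert \) is then tested on random pairs.

```python
import numpy as np

rng = np.random.default_rng(3)
M = rng.normal(size=(3, 3))
Q = M.T @ M + 0.5 * np.eye(3)            # symmetric positive definite
k, eta = np.linalg.eigvalsh(Q).max(), np.linalg.eigvalsh(Q).min()
mu = eta / k ** 2                        # any mu with 0 < mu < 2*eta/k^2 works
lam = 0.3                                # lambda in (0, 1)
nu = 1.0 - np.sqrt(1.0 - mu * (2 * eta - mu * k ** 2))

def T(x):                                # nonexpansive: projection onto the unit ball
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def T_lam(x):                            # T^lambda(x) = T(x) - lam*mu*F(T(x)), F(x) = Qx
    tx = T(x)
    return tx - lam * mu * (Q @ tx)

for _ in range(1000):
    x, y = rng.normal(size=3) * 2, rng.normal(size=3) * 2
    lhs = np.linalg.norm(T_lam(x) - T_lam(y))
    rhs = (1 - lam * nu) * np.linalg.norm(x - y)
    assert lhs <= rhs + 1e-10
print("contraction bound of Lemma 2.8 verified; nu =", round(float(nu), 4))
```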

Lemma 2.9

([24])

Each Hilbert space H satisfies the Opial condition, that is, for any sequence \(\{ x_{n}\}\) with \(x_{n}\) converging weakly to x, the inequality \(\mathop{\lim \inf}_{n \to \infty } \Vert x_{n} - x \Vert < \mathop{\lim \inf}_{n \to \infty } \Vert x_{n} - y \Vert \) holds for every \(y \in H\) with \(y \ne x\).

Lemma 2.10

([4] Demiclosedness principle)

Let C be a closed convex subset of a real Hilbert space H, and let \(T:C \to C\) be a nonexpansive mapping. Then \(I - T\) is demiclosed at zero; that is, if \(x_{n}\) converges weakly to x and \(x_{n} - Tx_{n} \to 0\), then \(x = Tx\).

3 Main results

Theorem 3.1

For \(i \in \{ 1,2 \} \), let \(H_{i}\) be a real Hilbert space, let \(C_{i}\) be a nonempty closed convex subset of \(H_{i}\), and let \(F_{i}:C_{i} \times C_{i} \to R\) be an equilibrium function. Let \(A:H_{1} \to H_{2}\) be a bounded linear operator with adjoint \(A^{*}\). For \(i \in \{ 1,2, \ldots, N \} \), let \(B_{i}:C_{1} \to H_{1}\) be \(\xi _{i}\)-inverse-strongly monotone, and let G be the mapping defined in Lemma 2.6 with \(C = C_{1}\) and \(\lambda _{i} \in (0,2\xi _{i})\). Let \(F:C_{1} \to C_{1}\) be a k-Lipschitzian and η-strongly monotone mapping, and let \(U:C_{1} \to C_{1}\) be a τ-Lipschitzian mapping. Let \(S,T:C_{1} \to C_{1}\) be two nonexpansive mappings such that \(\Theta = \Gamma \cap \mathrm{Fix}(G) \cap \mathrm{Fix}(T) \ne \emptyset \). For arbitrary \(x_{0} \in C_{1}\), let the iterative sequences \(\{ u_{n} \} \), \(\{ y_{n} \} \), \(\{ z_{n} \} \), and \(\{ x_{n} \} \) be generated by

$$\begin{aligned} \textstyle\begin{cases} u_{n} = R_{r_{n},F_{1}}(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}), \\ y_{n} = P_{C_{1}}(I - \lambda _{N}B_{N})P_{C_{1}}(I - \lambda _{N - 1}B_{N - 1}) \cdots P_{C_{1}}(I - \lambda _{2}B_{2})P_{C_{1}}(I - \lambda _{1}B_{1})u_{n}, \\ z_{n} = \beta _{n}Sx_{n} + (1 - \beta _{n})y_{n}, \\ x_{n + 1} = P_{C_{1}}[\alpha _{n}\rho U(x_{n}) + (I - \alpha _{n}\mu F)(T(z_{n}))], \end{cases}\displaystyle \end{aligned}$$
(8)

where \(\{ r_{n} \} \subset (0,\infty )\), \(\gamma \in (0,1 / L_{A})\), and \(L_{A}\) is the spectral radius of the operator \(A^{*}A\). Suppose that the parameters satisfy \(0 < \mu < \frac{2\eta }{k^{2}}\), \(k \ge \eta \), and \(0 \le \rho \tau < \nu \), where \(\nu = 1 - \sqrt{1 - \mu (2\eta - \mu k^{2})}\), and that \(\{ \alpha _{n} \} \), \(\{ \beta _{n} \} \) are sequences in \((0,1)\) satisfying the following conditions:

  1. (i)

    \(\lim_{n \to \infty } \alpha _{n} = 0\) and \(\sum_{n = 0}^{\infty } \alpha _{n} = \infty \), \(\sum_{n = 1}^{\infty } \vert \alpha _{n - 1} - \alpha _{n} \vert < \infty \);

  2. (ii)

    \(\mathop{\lim \sup}_{n \to \infty } \frac{\beta _{n}}{\alpha _{n}} = 0\), \(\beta _{n} \le \alpha _{n} ( n \ge 1 )\) and \(\sum_{n = 1}^{\infty } \vert \beta _{n - 1} - \beta _{n} \vert < \infty \);

  3. (iii)

    \(\mathop{\lim \inf}_{n \to \infty } r_{n} > 0\), \(\sum_{n = 1}^{\infty } \vert r_{n - 1} - r_{n} \vert < \infty\).

Then the sequence \(\{ x_{n} \} \) generated by (8) converges strongly to \(w \in \Theta \).
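
Before turning to the proof, we illustrate scheme (8) on a small example. All data below are toy choices of our own (they are not part of the theorem): \(H_{1} = H_{2} = R^{2}\), \(C_{1}\) the closed unit ball, \(C_{2}\) the ball of radius 2, \(F_{1} = F_{2} \equiv 0\) (so that the resolvents \(R_{r_{n},F_{i}}\) reduce to metric projections), \(N = 1\), \(B_{1}(x) = x - a\) with \(a = (0.5,0)\) (which is 1-inverse-strongly monotone), T the projection onto the horizontal axis, S a rotation, \(U(x) = 0.5x\), \(F(x) = x\), \(\rho = \mu = 1\), \(\alpha _{n} = 1/(n + 1)\), \(\beta _{n} = 1/(n + 1)^{2}\), and \(r_{n} \equiv 1\). These choices satisfy the hypotheses, and \(\Theta = \{ a \} \), so the iterates should approach a.

```python
import numpy as np

# Toy instance of algorithm (8); every piece of data below is our own choice,
# made only to illustrate the scheme.  Since F1 = F2 = 0, the resolvents
# R_{r_n, F_i} are just the metric projections onto C1 and C2.

def proj_ball(x, radius=1.0):
    n = np.linalg.norm(x)
    return x if n <= radius else radius * x / n

A = np.array([[1.0, 0.0], [0.0, 0.5]])     # bounded linear operator H1 -> H2
L_A = np.linalg.eigvalsh(A.T @ A).max()    # spectral radius of A*A
gamma, lam1 = 0.9 / L_A, 0.5               # gamma in (0, 1/L_A), lambda_1 in (0, 2*xi_1)
a = np.array([0.5, 0.0])                   # Theta = {a} for this data

S = lambda x: np.array([-x[1], x[0]])      # rotation: nonexpansive, maps C1 into C1
T = lambda x: np.array([x[0], 0.0])        # projection onto the horizontal axis
U = lambda x: 0.5 * x                      # 0.5-Lipschitzian
F = lambda x: x                            # 1-Lipschitzian and 1-strongly monotone
rho, mu = 1.0, 1.0

x = np.array([-0.3, 0.8])
for n in range(1, 5001):
    alpha, beta = 1.0 / (n + 1), 1.0 / (n + 1) ** 2
    u = proj_ball(x + gamma * A.T @ (proj_ball(A @ x, 2.0) - A @ x))   # u_n
    y = proj_ball(u - lam1 * (u - a))                                  # y_n  (B1(x) = x - a)
    z = beta * S(x) + (1 - beta) * y                                   # z_n
    x = proj_ball(alpha * rho * U(x) + (T(z) - alpha * mu * F(T(z))))  # x_{n+1}

print("x_n after 5000 steps:", np.round(x, 4), "(expected to approach [0.5, 0.0])")
```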

Proof

Let \(p \in \Theta \). In particular, \(p \in \Gamma \), so that \(p = R_{r_{n},F_{1}}(p)\) and \(Ap = R_{r_{n},F_{2}}(Ap)\). For convenience, we split the proof into several steps.

Step 1. We show that \(\{ x_{n} \} \), \(\{ u_{n} \} \), \(\{ y_{n} \} \), \(\{ z_{n} \} \) are bounded.

First, by (8) and the nonexpansiveness of \(R_{r_{n},F_{1}}\), we estimate

$$\begin{aligned} \begin{aligned}[b] \Vert u_{n} - p \Vert ^{2} &= \bigl\Vert R_{r_{n},F_{1}}\bigl(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\bigr) - p \bigr\Vert ^{2} \\ &= \bigl\Vert R_{r_{n},F_{1}}\bigl(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\bigr) - R_{r_{n},F_{1}}(p) \bigr\Vert ^{2} \\ &\le \bigl\Vert x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} - p \bigr\Vert ^{2} \\ &= \Vert x_{n} - p \Vert ^{2} + \gamma ^{2} \bigl\Vert A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} + 2\gamma \bigl\langle x_{n} - p,A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle . \end{aligned} \end{aligned}$$
(9)

It follows from the definition of \(L_{A}\) that

$$\begin{aligned} \begin{aligned}[b]&\gamma ^{2} \bigl\Vert A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \\ &\quad= \gamma ^{2} \bigl\langle (R_{r_{n},F_{2}} - I)Ax_{n},AA^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle \\ &\quad\le L_{A}\gamma ^{2} \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2}. \end{aligned} \end{aligned}$$
(10)

By using Lemma 2.4, we have

$$\begin{aligned} \begin{aligned}[b] &2\gamma \bigl\langle x_{n} - p,A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle \\ &\quad= 2\gamma \bigl\langle A(x_{n} - p),(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle \\ &\quad= 2\gamma \bigl\langle A(x_{n} - p) + (R_{r_{n},F_{2}} - I)Ax_{n} - (R_{r_{n},F_{2}} - I)Ax_{n},(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle \\ &\quad= 2\gamma \bigl\{ \bigl\langle R_{r_{n},F_{2}}Ax_{n} - Ap,(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle - \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \bigr\} \\ &\quad\le 2\gamma \biggl\{ \frac{1}{2} \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} - \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \biggr\} \\ &\quad= - \gamma \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2}. \end{aligned} \end{aligned}$$
(11)

From (9)–(11) and \(\gamma \in (0,1 / L_{A})\) it follows that

$$\begin{aligned} \Vert u_{n} - p \Vert ^{2} \le \Vert x_{n} - p \Vert ^{2} + \gamma (L_{A}\gamma - 1) \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \le \Vert x_{n} - p \Vert ^{2}. \end{aligned}$$
(12)

It follows from (8), (12), and Lemma 2.6 that we have

$$\begin{aligned} \Vert y_{n} - p \Vert = \bigl\Vert T^{N}u_{n} - T^{N}p \bigr\Vert \le \Vert u_{n} - p \Vert \le \Vert x_{n} - p \Vert . \end{aligned}$$
(13)

Next, we prove that the sequence \(\{ x_{n} \} \) is bounded. Note that \(\beta _{n} \le \alpha _{n}\) for all \(n \ge 1\). Put \(V_{n} = \alpha _{n}\rho U(x_{n}) + (I - \alpha _{n}\mu F)(T(z_{n}))\). Then, from (8), we get

$$\begin{aligned} \Vert x_{n + 1} - p \Vert ={}& \bigl\Vert P_{C_{1}}\bigl[\alpha _{n}\rho U(x_{n}) + (I - \alpha _{n}\mu F) \bigl(T(z_{n})\bigr)\bigr] - p \bigr\Vert \\ \le{}& \alpha _{n} \bigl\Vert \rho U(x_{n}) - \mu F(p) \bigr\Vert + \bigl\Vert (I - \alpha _{n}\mu F) \bigl(T(z_{n})\bigr) - (I - \alpha _{n}\mu F) \bigl(T(p) \bigr) \bigr\Vert \\ ={}& \alpha _{n} \bigl\Vert \rho U(x_{n}) - \rho U(p) + ( \rho U - \mu F) (p) \bigr\Vert \\ &{}+ \bigl\Vert (I - \alpha _{n}\mu F) \bigl(T(z_{n})\bigr) - (I - \alpha _{n}\mu F) \bigl(T(p)\bigr) \bigr\Vert \\ \le{}& \alpha _{n}\rho \tau \Vert x_{n} - p \Vert + \alpha _{n} \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert + (1 - \alpha _{n}\nu ) \Vert z_{n} - p \Vert \\ \le{}& \alpha _{n}\rho \tau \Vert x_{n} - p \Vert + \alpha _{n} \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert \\ &{}+ (1 - \alpha _{n}\nu ) \bigl\Vert \beta _{n}Sx_{n} + (1 - \beta _{n})y_{n} - p \bigr\Vert \\ \le{}& \alpha _{n}\rho \tau \Vert x_{n} - p \Vert + \alpha _{n} \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert \\ &{}+ (1 - \alpha _{n}\nu ) \bigl(\beta _{n} \Vert Sx_{n} - Sp \Vert + \beta _{n} \Vert Sp - p \Vert + (1 - \beta _{n}) \Vert y_{n} - p \Vert \bigr) \\ \le{}& \alpha _{n}\rho \tau \Vert x_{n} - p \Vert + \alpha _{n} \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert \\ &{}+ (1 - \alpha _{n}\nu ) \bigl(\beta _{n} \Vert x_{n} - p \Vert + \beta _{n} \Vert Sp - p \Vert + (1 - \beta _{n}) \Vert x_{n} - p \Vert \bigr) \\ \le{}& \bigl(1 - \alpha _{n}(\nu - \rho \tau )\bigr) \Vert x_{n} - p \Vert + \alpha _{n} \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert \\ &{}+ (1 - \alpha _{n}\nu )\beta _{n} \Vert Sp - p \Vert \\ \le{}& \bigl(1 - \alpha _{n}(\nu - \rho \tau )\bigr) \Vert x_{n} - p \Vert + \alpha _{n} \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert + \beta _{n} \Vert Sp - p \Vert \\ \le{}& \bigl(1 - \alpha _{n}(\nu - \rho \tau )\bigr) \Vert x_{n} - p \Vert + \alpha _{n}\bigl( \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert + \Vert Sp - p \Vert \bigr) \\ \le{}& \bigl(1 - \alpha _{n}(\nu - \rho \tau )\bigr) \Vert x_{n} - p \Vert + \frac{\alpha _{n}(\nu - \rho \tau )}{\nu - \rho \tau } \bigl( \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert + \Vert Sp - p \Vert \bigr) \\ \le{}& \max \biggl\{ \Vert x_{0} - p \Vert ,\frac{1}{\nu - \rho \tau } \bigl( \bigl\Vert (\rho U - \mu F) (p) \bigr\Vert + \Vert Sp - p \Vert \bigr) \biggr\} . \end{aligned}$$
(14)

So \(\{ x_{n} \} \) is bounded, and consequently we can deduce that \(\{ u_{n} \} \), \(\{ y_{n} \}, \{ z_{n} \} \) are also bounded.

Step 2. We will show the following:

$$\begin{aligned} (\mathrm{a})\quad \lim_{n \to \infty } \Vert x_{n + 1} - x_{n} \Vert = 0;\qquad (\mathrm{b})\quad\lim_{n \to \infty } \Vert u_{n} - x_{n} \Vert = 0;\qquad (\mathrm{c})\quad\lim _{n \to \infty } \Vert u_{n} - y_{n} \Vert = 0. \end{aligned}$$

Noting that \(u_{n} = R_{r_{n},F_{1}}(v_{n})\) and \(u_{n - 1} = R_{r_{n - 1},F_{1}}(v_{n - 1})\), where \(v_{n} = x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\), from Lemma 2.2 we have

$$\begin{aligned} &\Vert u_{n} - u_{n - 1} \Vert \\ &\quad = \Vert R_{r_{n},F_{1}}v_{n} - R_{r_{n - 1},F_{1}}v_{n - 1} \Vert \\ &\quad\le \bigl\Vert x_{n} - x_{n - 1} + \gamma A^{*} \bigl[(R_{r_{n},F_{2}} - I)Ax_{n} - (R_{r_{n - 1},F_{2}} - I)Ax_{n - 1})\bigr] \bigr\Vert \\ &\qquad{}+ \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \bigl\Vert R_{r_{n},F_{1}} \bigl(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\bigr) - x_{n} - \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert \\ &\quad \le \bigl\Vert x_{n} - x_{n - 1} - \gamma A^{*}A(x_{n} - x_{n - 1}) \bigr\Vert + \gamma \bigl\Vert A^{*} \bigr\Vert \Vert R_{r_{n},F_{2}}Ax_{n} - R_{r_{n - 1},F_{2}}Ax_{n - 1} \Vert \\ &\qquad{} + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \delta _{n - 1} \\ &\quad\le \bigl\{ \Vert x_{n} - x_{n - 1} \Vert ^{2} - 2\gamma \Vert Ax_{n} - Ax_{n - 1} \Vert ^{2} + \gamma ^{2} \Vert A \Vert ^{4} \Vert x_{n} - x_{n - 1} \Vert ^{2} \bigr\} ^{\frac{1}{2}} \\ &\qquad{}+ \gamma \Vert A \Vert \biggl\{ \Vert Ax_{n} - Ax_{n - 1} \Vert + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \Vert R_{r_{n},F_{2}}Ax_{n} - Ax_{n} \Vert \biggr\} + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \delta _{n - 1} \\ &\quad\le \bigl(1 - 2\gamma \Vert A \Vert ^{2} + \gamma ^{2} \Vert A \Vert ^{4}\bigr)^{\frac{1}{2}} \Vert x_{n} - x_{n - 1} \Vert + \gamma \Vert A \Vert ^{2} \Vert x_{n} - x_{n - 1} \Vert \\ &\qquad{} + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \gamma \Vert A \Vert \sigma _{n - 1} + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \delta _{n - 1} \\ &\quad= \Vert x_{n} - x_{n - 1} \Vert + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \bigl(\gamma \Vert A \Vert \sigma _{n - 1} + \delta _{n - 1}\bigr), \end{aligned}$$
(15)

where

$$\begin{aligned} &\delta _{n - 1} = \bigl\Vert R_{r_{n},F_{1}}\bigl(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\bigr) - \bigl(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\bigr) \bigr\Vert , \\ &\sigma _{n - 1} = \Vert R_{r_{n},F_{2}}Ax_{n} - Ax_{n} \Vert . \end{aligned}$$

So, from Lemma 2.6, we have

$$\begin{aligned} \begin{aligned}[b] \Vert y_{n} - y_{n - 1} \Vert & = \bigl\Vert G(u_{n}) - G(u_{n - 1}) \bigr\Vert \le \Vert u_{n} - u_{n - 1} \Vert \\ &\le \Vert x_{n} - x_{n - 1} \Vert + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \bigl(\gamma \Vert A \Vert \sigma _{n - 1} + \delta _{n - 1}\bigr). \end{aligned} \end{aligned}$$
(16)

Then from (16) we get

$$\begin{aligned} \Vert z_{n} - z_{n - 1} \Vert ={}& \bigl\Vert \beta _{n}Sx_{n} + (1 - \beta _{n})y_{n} - \beta _{n - 1}Sx_{n - 1} - (1 - \beta _{n - 1})y_{n - 1} \bigr\Vert \\ \le{}& \beta _{n} \Vert x_{n} - x_{n - 1} \Vert + \vert \beta _{n} - \beta _{n - 1} \vert \bigl( \Vert Sx_{n - 1} \Vert + \Vert y_{n - 1} \Vert \bigr) + (1 - \beta _{n}) \Vert y_{n} - y_{n - 1} \Vert \\ \le{}& \beta _{n} \Vert x_{n} - x_{n - 1} \Vert + \vert \beta _{n} - \beta _{n - 1} \vert \bigl( \Vert Sx_{n - 1} \Vert + \Vert y_{n - 1} \Vert \bigr) \\ &{}+ (1 - \beta _{n}) \biggl\{ \Vert x_{n} - x_{n - 1} \Vert + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \bigl(\gamma \Vert A \Vert \sigma _{n - 1} + \delta _{n - 1} \bigr) \biggr\} \\ \le{}& \Vert x_{n} - x_{n - 1} \Vert + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \bigl(\gamma \Vert A \Vert \sigma _{n - 1} + \delta _{n - 1}\bigr) \\ &{}+ \vert \beta _{n} - \beta _{n - 1} \vert \bigl( \Vert Sx_{n - 1} \Vert + \Vert y_{n - 1} \Vert \bigr). \end{aligned}$$
(17)

Next, by Lemma 2.8, we estimate

$$\begin{aligned} \begin{aligned}[b] &\Vert x_{n + 1} - x_{n} \Vert \\ &\quad= \bigl\Vert P_{C}[V_{n}] - P_{C}[V_{n - 1}] \bigr\Vert \\ &\quad\le \bigl\Vert \alpha _{n}\rho \bigl(U(x_{n}) - U(x_{n - 1})\bigr) + (\alpha _{n} - \alpha _{n - 1})\rho U(x_{n - 1}) + (I - \alpha _{n}\mu F) \bigl(T(z_{n}) \bigr) \\ &\qquad{} - (I - \alpha _{n}\mu F) \bigl(T(z_{n - 1})\bigr) + (I - \alpha _{n}\mu F) \bigl(T(z_{n - 1})\bigr) - (I - \alpha _{n - 1}\mu F) \bigl(T(z_{n - 1})\bigr) \bigr\Vert \\ &\quad\le \alpha _{n}\rho \tau \Vert x_{n} - x_{n - 1} \Vert + \vert \alpha _{n} - \alpha _{n - 1} \vert \bigl( \bigl\Vert \rho U(x_{n - 1}) \bigr\Vert + \bigl\Vert \mu F \bigl(T(z_{n - 1})\bigr) \bigr\Vert \bigr)\\ &\qquad{} + (1 - \alpha _{n} \nu ) \Vert z_{n} - z_{n - 1} \Vert . \end{aligned} \end{aligned}$$
(18)

From (17) and (18), we get

$$\begin{aligned} & \Vert x_{n + 1} - x_{n} \Vert \\ &\quad \le \alpha _{n}\rho \tau \Vert x_{n} - x_{n - 1} \Vert + \vert \alpha _{n} - \alpha _{n - 1} \vert \bigl( \bigl\Vert \rho U(x_{n - 1}) \bigr\Vert + \bigl\Vert \mu F \bigl(T(z_{n - 1})\bigr) \bigr\Vert \bigr) \\ &\qquad{}+ (1 - \alpha _{n}\nu ) \biggl\{ \Vert x_{n} - x_{n - 1} \Vert + \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \bigl(\gamma \Vert A \Vert \sigma _{n - 1} + \delta _{n - 1} \bigr) \\ &\qquad{}+ \vert \beta _{n} - \beta _{n - 1} \vert \bigl( \Vert Sx_{n - 1} \Vert + \Vert z_{n - 1} \Vert \bigr) \biggr\} \\ &\quad\le \bigl(1 - (\nu - \rho \tau )\alpha _{n}\bigr) \Vert x_{n} - x_{n - 1} \Vert + \vert \alpha _{n} - \alpha _{n - 1} \vert \bigl( \bigl\Vert \rho U(x_{n - 1}) \bigr\Vert + \bigl\Vert \mu F\bigl(T(z_{n - 1})\bigr) \bigr\Vert \bigr) \\ &\qquad{}+ \biggl\vert 1 - \frac{r_{n - 1}}{r_{n}} \biggr\vert \bigl(\gamma \Vert A \Vert \sigma _{n - 1} + \delta _{n - 1}\bigr) + \vert \beta _{n} - \beta _{n - 1} \vert \bigl( \Vert Sx_{n - 1} \Vert + \Vert z_{n - 1} \Vert \bigr) \\ &\quad \le \bigl(1 - (\nu - \rho \tau )\alpha _{n}\bigr) \Vert x_{n} - x_{n - 1} \Vert + M\biggl( \vert \alpha _{n} - \alpha _{n - 1} \vert + \frac{1}{\varepsilon } \vert r_{n - 1} - r_{n} \vert + \vert \beta _{n} - \beta _{n - 1} \vert \biggr), \end{aligned}$$
(19)

where \(M = \max \{ \sup_{n \ge 1}( \Vert \rho U(x_{n - 1}) \Vert + \Vert \mu F(T(z_{n - 1})) \Vert ), \sup_{n \ge 1}(\gamma \Vert A \Vert \sigma _{n - 1} + \delta _{n - 1}), \sup_{n \ge 1}( \Vert Sx_{n - 1} \Vert + \Vert z_{n - 1} \Vert ) \} \), and ε is a real number such that \(0 < \varepsilon < r_{n}\) for all n (such an ε exists by condition (iii)). So, it follows from conditions (i)–(iii) and Lemma 2.3 that

$$\begin{aligned} \lim_{n \to \infty } \Vert x_{n + 1} - x_{n} \Vert = 0. \end{aligned}$$
(20)

Next, we show that \(\lim_{n \to \infty } \Vert u_{n} - x_{n} \Vert = 0\). In view of (8), (9), (12), and (13), we obtain

$$\begin{aligned} \begin{aligned}[b] \Vert x_{n + 1} - p \Vert ^{2} ={}& \bigl\langle P_{C}[V_{n}] - p,x_{n + 1} - p \bigr\rangle \\ ={}& \bigl\langle P_{C}[V_{n}] - V_{n},P_{C}[V_{n}] - p \bigr\rangle + \langle V_{n} - p,x_{n + 1} - p \rangle \\ \le{}& \bigl\langle \alpha _{n}\bigl(\rho U(x_{n}) - \mu F(p) \bigr) + (I - \alpha _{n}\mu F) \bigl(T(z_{n})\bigr) \\ &{}- (I - \alpha _{n}\mu F) \bigl(T(p)\bigr),x_{n + 1} - p \bigr\rangle \\ ={}& \bigl\langle \alpha _{n}\rho \bigl(U(x_{n}) - U(p) \bigr),x_{n + 1} - p \bigr\rangle + \alpha _{n} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &{}+ \bigl\langle (I - \alpha _{n}\mu F) \bigl(T(z_{n})\bigr) - (I - \alpha _{n}\mu F) \bigl(T(p)\bigr),x_{n + 1} - p \bigr\rangle \\ \le{}& \alpha _{n}\rho \tau \Vert x_{n} - p \Vert \Vert x_{n + 1} - p \Vert + \alpha _{n} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &{}+ (1 - \alpha _{n}\nu ) \Vert z_{n} - p \Vert \Vert x_{n + 1} - p \Vert \\ \le{}& \frac{\alpha _{n}\rho \tau }{2}\bigl( \Vert x_{n} - p \Vert ^{2} + \Vert x_{n + 1} - p \Vert ^{2}\bigr) + \alpha _{n} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &{}+ \frac{(1 - \alpha _{n}\nu )}{2}\bigl( \Vert z_{n} - p \Vert ^{2} + \Vert x_{n + 1} - p \Vert ^{2}\bigr) \\ \le{}& \frac{(1 - \alpha _{n}(\nu - \rho \tau ))}{2} \Vert x_{n + 1} - p \Vert ^{2} + \alpha _{n} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &{}+ \frac{\alpha _{n}\rho \tau }{2} \Vert x_{n} - p \Vert ^{2} + \frac{(1 - \alpha _{n}\nu )}{2} \Vert z_{n} - p \Vert ^{2} \\ \le{}& \frac{(1 - \alpha _{n}(\nu - \rho \tau ))}{2} \Vert x_{n + 1} - p \Vert ^{2} + \alpha _{n} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &{}+ \frac{\alpha _{n}\rho \tau }{2} \Vert x_{n} - p \Vert ^{2} + \frac{(1 - \alpha _{n}\nu )}{2}\bigl(\beta _{n} \Vert Sx_{n} - p \Vert ^{2} + (1 - \beta _{n}) \Vert y_{n} - p \Vert ^{2}\bigr). \end{aligned} \end{aligned}$$
(21)

From the above inequality and (12), (13), we get

$$\begin{aligned} \Vert x_{n + 1} - p \Vert ^{2} \le{}& \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} \\ &{}+ \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\{ \Vert x_{n} - p \Vert ^{2} + \gamma (L_{A}\gamma - 1) \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \bigr\} \\ \le{}& \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} \\ &{}+ \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\{ \gamma (L_{A}\gamma - 1) \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \bigr\} , \end{aligned}$$
(22)

which means that

$$\begin{aligned} \begin{aligned}[b]& \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\{ \gamma (1 - L_{A}\gamma ) \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \bigr\} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} - \Vert x_{n + 1} - p \Vert ^{2} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \beta _{n} \Vert Sx_{n} - p \Vert ^{2} + \bigl( \Vert x_{n} - p \Vert + \Vert x_{n + 1} - p \Vert \bigr) \Vert x_{n + 1} - x_{n} \Vert . \end{aligned} \end{aligned}$$
(23)

Since \(\alpha _{n} \to 0\), \(\beta _{n} \to 0\) and \(\lim_{n \to \infty } \Vert x_{n + 1} - x_{n} \Vert = 0\), we obtain

$$\lim_{n \to \infty } \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert = 0. $$

And since \(R_{r_{n},F_{1}}\) is firmly nonexpansive, from (8) we get

$$\begin{aligned} &\Vert u_{n} - p \Vert ^{2} \\ &\quad= \bigl\Vert R_{r_{n},F_{1}}\bigl(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\bigr) - p \bigr\Vert ^{2} \\ &\quad= \bigl\Vert R_{r_{n},F_{1}}\bigl(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}\bigr) - R_{r_{n},F_{1}}(p) \bigr\Vert ^{2} \\ &\quad\le \bigl\langle u_{n} - p,x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} - p \bigr\rangle \\ &\quad= \frac{1}{2} \bigl\{ \Vert u_{n} - p \Vert ^{2} + \bigl\Vert x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} - p \bigr\Vert ^{2} \\ &\qquad{} - \bigl\Vert u_{n} - p - \bigl[ x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} - p \bigr] \bigr\Vert ^{2} \bigr\} \\ &\quad= \frac{1}{2} \bigl\{ \Vert u_{n} - p \Vert ^{2} + \bigl\Vert x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} - p \bigr\Vert ^{2} \\ &\qquad{}- \bigl\Vert u_{n} - x_{n} - \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \bigr\} \\ &\quad= \frac{1}{2} \bigl\{ \Vert u_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} + 2\gamma \bigl\langle x_{n} - p,A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle \\ &\qquad{} + \gamma ^{2} \bigl\Vert A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2} \\ &\qquad{}- \bigl[ \Vert u_{n} - x_{n} \Vert ^{2} - 2\gamma \bigl\langle u_{n} - x_{n},A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle + \gamma ^{2} \bigl\Vert A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert ^{2}\bigr] \bigr\} \\ &\quad= \frac{1}{2} \bigl\{ \Vert u_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} + 2\gamma \bigl\langle u_{n} - p,A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle - \Vert u_{n} - x_{n} \Vert ^{2} \bigr\} , \end{aligned}$$
(24)

which implies that

$$\begin{aligned} \Vert u_{n} - p \Vert ^{2} \le \Vert x_{n} - p \Vert ^{2} - \Vert u_{n} - x_{n} \Vert ^{2} + 2\gamma \bigl\Vert A(u_{n} - p) \bigr\Vert \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert . \end{aligned}$$
(25)

So, from (21) and (25) we have

$$\begin{aligned} & \Vert x_{n + 1} - p \Vert ^{2} \\ &\quad\le \frac{(1 - \alpha _{n}(\nu - \rho \tau ))}{2} \Vert x_{n + 1} - p \Vert ^{2} + \alpha _{n} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{\alpha _{n}\rho \tau }{2} \Vert x_{n} - p \Vert ^{2} + \frac{(1 - \alpha _{n}\nu )}{2}\bigl(\beta _{n} \Vert Sx_{n} - p \Vert ^{2} + (1 - \beta _{n}) \Vert u_{n} - p \Vert ^{2}\bigr) \\ &\quad\le \frac{(1 - \alpha _{n}(\nu - \rho \tau ))}{2} \Vert x_{n + 1} - p \Vert ^{2} + \alpha _{n} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle + \frac{\alpha _{n}\rho \tau }{2} \Vert x_{n} - p \Vert ^{2} \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )}{2} \bigl\{ \beta _{n} \Vert Sx_{n} - p \Vert ^{2} + (1 - \beta _{n}) \bigl( \Vert x_{n} - p \Vert ^{2} - \Vert u_{n} - x_{n} \Vert ^{2} \\ &\qquad{}+ 2\gamma \bigl\Vert A(u_{n} - p) \bigr\Vert \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert \bigr) \bigr\} , \end{aligned}$$
(26)

which implies that

$$\begin{aligned} \begin{aligned}[b]& \Vert x_{n + 1} - p \Vert ^{2} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\{ \Vert x_{n} - p \Vert ^{2} - \Vert u_{n} - x_{n} \Vert ^{2} \\ &\qquad{} + 2\gamma \bigl\Vert A(u_{n} - p) \bigr\Vert \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert \bigr\} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\{ - \Vert u_{n} - x_{n} \Vert ^{2} + 2\gamma \bigl\Vert A(u_{n} - p) \bigr\Vert \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert \bigr\} . \end{aligned} \end{aligned}$$
(27)

Hence

$$\begin{aligned} &\frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert u_{n} - x_{n} \Vert ^{2} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} - \Vert x_{n + 1} - p \Vert ^{2} \\ &\qquad{}+ \frac{2(1 - \alpha _{n}\nu )(1 - \beta _{n})\gamma }{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\Vert A(u_{n} - p) \bigr\Vert \bigl\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \bigr\Vert \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \bigl( \Vert x_{n} - p \Vert + \Vert x_{n + 1} - p \Vert \bigr) \Vert x_{n + 1} - x_{n} \Vert . \end{aligned}$$
(28)

Since \(\lim_{n \to \infty } \alpha _{n} = 0\), \(\lim_{n \to \infty } \beta _{n} = 0\), \(\lim_{n \to \infty } \Vert x_{n + 1} - x_{n} \Vert = 0\), and \(\lim_{n \to \infty } \Vert (R_{r_{n},F_{2}} - I)Ax_{n} \Vert = 0\), we have

$$\begin{aligned} \lim_{n \to \infty } \Vert u_{n} - x_{n} \Vert = 0. \end{aligned}$$

Then, by Lemma 2.5 and Lemma 2.6, we obtain

$$\begin{aligned} \begin{aligned}[b]& \bigl\Vert T^{N}u_{n} - T^{N}p \bigr\Vert ^{2}\\ &\quad = \bigl\Vert P_{C_{1}}(I - \lambda _{N}B_{N})T^{N - 1}u_{n} - P_{C_{1}}(I - \lambda _{N}B_{N})T^{N - 1}p \bigr\Vert ^{2} \\ &\quad\le \bigl\Vert (I - \lambda _{N}B_{N})T^{N - 1}u_{n} - (I - \lambda _{N}B_{N})T^{N - 1}p \bigr\Vert ^{2} \\ &\quad\le \bigl\Vert T^{N - 1}u_{n} - T^{N - 1}p \bigr\Vert ^{2} + \lambda _{N}(\lambda _{N} - 2\xi _{N}) \bigl\Vert B_{N}T^{N - 1}u_{n} - B_{N}T^{N - 1}p \bigr\Vert ^{2} \\ &\quad\le \Vert u_{n} - p \Vert ^{2} + \sum _{i = 1}^{N} \lambda _{i} (\lambda _{i} - 2\xi _{i}) \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert ^{2} \\ &\quad\le \Vert x_{n} - p \Vert ^{2} + \sum _{i = 1}^{N} \lambda _{i} (\lambda _{i} - 2\xi {}_{i}) \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert ^{2}. \end{aligned} \end{aligned}$$
(29)

From (21), we obtain

$$\begin{aligned} \begin{aligned}[b]& \Vert x_{n + 1} - p \Vert ^{2} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Biggl\{ \Vert x_{n} - p \Vert ^{2} + \sum_{i = 1}^{N} \lambda _{i} (\lambda _{i} - 2\xi _{i}) \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert ^{2} \Biggr\} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} \\ &\qquad{}+ \Vert x_{n} - p \Vert ^{2} + \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Biggl\{ \sum_{i = 1}^{N} \lambda _{i} (\lambda _{i} - 2\xi {}_{i}) \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert ^{2} \Biggr\} , \end{aligned} \end{aligned}$$
(30)

which implies that

$$\begin{aligned} & \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Biggl\{ \sum _{i = 1}^{N} \lambda _{i}(2\xi _{i} - \lambda _{i}) \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert ^{2} \Biggr\} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} - \Vert x_{n + 1} - p \Vert ^{2} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} \\ &\qquad{}+ \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \bigl( \Vert x_{n} - p \Vert + \Vert x_{n + 1} - p \Vert \bigr) \Vert x_{n} - x_{n + 1} \Vert . \end{aligned}$$
(31)

Since \(\lim_{n \to \infty } \alpha _{n} = 0,\lim_{n \to \infty } \beta _{n} = 0\) and \(\lim_{n \to \infty } \Vert x_{n + 1} - x_{n} \Vert = 0\), we have

$$\begin{aligned} \lim_{n \to \infty } \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert = 0. \end{aligned}$$

By Lemma 2.4, we obtain

$$\begin{aligned} \begin{aligned}[b] &\Vert y_{n} - p \Vert ^{2} \\ &\quad= \bigl\Vert T^{N}u_{n} - T^{N}p \bigr\Vert ^{2} \\ &\quad= \bigl\Vert P_{C}(I - \lambda _{N}B_{N})T^{N - 1}u_{n} - P_{C}(I - \lambda _{N}B_{N})T^{N - 1}p \bigr\Vert ^{2} \\ &\quad\le \bigl\langle (I - \lambda _{N}B_{N})T^{N - 1}u_{n} - (I - \lambda _{N}B_{N})T^{N - 1}p,T^{N}u_{n} - T^{N}p \bigr\rangle \\ &\quad=\frac{1}{2}\bigl( \Vert y_{n} - p \Vert ^{2} + \bigl\Vert (I - \lambda _{N}B_{N})T^{N - 1}u_{n} - (I - \lambda _{N}B_{N})T^{N - 1}p \bigr\Vert ^{2} \\ &\qquad{}- \bigl\Vert (I - \lambda _{N}B_{N})T^{N - 1}u_{n} - (I - \lambda _{N}B_{N})T^{N - 1}p - \bigl(T^{N}u_{n} - T^{N}p\bigr) \bigr\Vert ^{2}\bigr) \\ &\quad\le \frac{1}{2}\bigl( \Vert y_{n} - p \Vert ^{2} + \bigl\Vert T^{N - 1}u_{n} - T^{N - 1}p \bigr\Vert ^{2} \\ &\qquad{}- \bigl\Vert T^{N - 1}u_{n} - T^{N}u_{n} + T^{N}p - T^{N - 1}p - \lambda _{N} \bigl(B_{N}T^{N - 1}u_{n} - B_{N}T^{N - 1}p \bigr) \bigr\Vert ^{2}\bigr), \end{aligned} \end{aligned}$$
(32)

which implies

$$\begin{aligned} & \Vert y_{n} - p \Vert ^{2} \\ &\quad\le \bigl\Vert T^{N - 1}u_{n} - T^{N - 1}p \bigr\Vert ^{2} \\ &\qquad{}- \bigl\Vert T^{N - 1}u_{n} - T^{N}u_{n} + T^{N}p - T^{N - 1}p - \lambda _{N} \bigl(B_{N}T^{N - 1}u_{n} - B_{N}T^{N - 1}p \bigr) \bigr\Vert ^{2} \\ &\quad= \bigl\Vert T^{N - 1}u_{n} - T^{N - 1}p \bigr\Vert ^{2} - \bigl\Vert T^{N - 1}u_{n} - T^{N}u_{n} + T^{N}p - T^{N - 1}p \bigr\Vert ^{2} \\ &\qquad{}- \lambda _{N}^{2} \bigl\Vert B_{N}T^{N - 1}u_{n} - B_{N}T^{N - 1}p \bigr\Vert ^{2} \\ &\qquad{}+ 2\lambda _{N} \bigl\langle T^{N - 1}u_{n} - T^{N}u_{n} + T^{N}p - T^{N - 1}p,B_{N}T^{N - 1}u_{n} - B_{N}T^{N - 1}p \bigr\rangle \\ &\quad\le \bigl\Vert T^{N - 1}u_{n} - T^{N - 1}p \bigr\Vert ^{2} - \bigl\Vert T^{N - 1}u_{n} - T^{N}u_{n} + T^{N}p - T^{N - 1}p \bigr\Vert ^{2} \\ &\qquad{}+ 2\lambda _{N} \bigl\Vert T^{N - 1}u_{n} - T^{N}u_{n} + T^{N}p - T^{N - 1}p \bigr\Vert \bigl\Vert B_{N}T^{N - 1}u_{n} - B_{N}T^{N - 1}p \bigr\Vert . \end{aligned}$$
(33)

By induction and (12), we have

$$\begin{aligned} \begin{aligned}[b] \Vert y_{n} - p \Vert ^{2} \le{}& \Vert x_{n} - p \Vert ^{2} - \sum _{i = 1}^{N} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert ^{2} \\ &{}+ \sum_{i = 1}^{N} 2\lambda _{i} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert . \end{aligned} \end{aligned}$$
(34)

It follows from (21) and (34) that we have

$$\begin{aligned} \begin{aligned}[b] &\Vert x_{n + 1} - p \Vert ^{2} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Biggl\{ \Vert x_{n} - p \Vert ^{2} \\ &\qquad{} - \sum_{i = 1}^{N} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert ^{2} \\ &\qquad{}+ \sum _{i = 1}^{N} 2\lambda _{i} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert \Biggr\} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Biggl\{ - \sum_{i = 1}^{N} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert ^{2} \\ &\qquad{} + \sum_{i = 1}^{N} 2\lambda _{i} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert \bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert \Biggr\} , \end{aligned} \end{aligned}$$
(35)

which implies

$$\begin{aligned} & \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Biggl\{ \sum _{i = 1}^{N} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert ^{2} \Biggr\} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \Vert x_{n} - p \Vert ^{2} - \Vert x_{n + 1} - p \Vert ^{2} \\ &\quad\le \frac{\alpha _{n}\rho \tau }{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - p \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(p) - \mu F(p),x_{n + 1} - p \bigr\rangle \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sx_{n} - p \Vert ^{2} + \bigl( \Vert x_{n} - p \Vert + \Vert x_{n + 1} - p \Vert \bigr) \Vert x_{n} - x_{n + 1} \Vert \\ &\qquad{}+ \frac{(1 - \alpha _{n}\nu )(1 - \beta _{n})}{1 + \alpha _{n}(\nu - \rho \tau )} \Biggl\{ \sum_{i = 1}^{N} 2\lambda _{i} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert \\ &\qquad{}\times\bigl\Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \bigr\Vert \Biggr\} . \end{aligned}$$
(36)

Since \(\lim_{n \to \infty } \alpha _{n} = 0\), \(\lim_{n \to \infty } \beta _{n} = 0\) and \(\lim_{n \to \infty } \Vert B_{i}T^{i - 1}u_{n} - B_{i}T^{i - 1}p \Vert ^{2} = 0\), we have

$$\begin{aligned} \lim_{n \to \infty } \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert = 0. \end{aligned}$$
(37)

From (37), we obtain

$$\begin{aligned} \Vert u_{n} - y_{n} \Vert = \bigl\Vert T^{0}u_{n} - T^{N}u_{n} \bigr\Vert \le \sum_{i = 1}^{N} \bigl\Vert T^{i - 1}u_{n} - T^{i}u_{n} + T^{i}p - T^{i - 1}p \bigr\Vert , \end{aligned}$$
(38)

which means \(\lim_{n \to \infty } \Vert u_{n} - y_{n} \Vert = 0\). Since \(\lim_{n \to \infty } \Vert u_{n} - x_{n} \Vert = 0\) and \(\lim_{n \to \infty } \Vert u_{n} - y_{n} \Vert = 0\), we have \(\lim_{n \to \infty } \Vert x_{n} - y_{n} \Vert = 0\). Since \(T(x_{n}) \in C_{1}\), we have

$$\begin{aligned} \bigl\Vert x_{n} - T(x_{n}) \bigr\Vert \le {}&\Vert x_{n} - x_{n + 1} \Vert + \bigl\Vert x_{n + 1} - T(x_{n}) \bigr\Vert \\ ={}& \Vert x_{n} - x_{n + 1} \Vert + \bigl\Vert P_{C_{1}}[V_{n}] - P_{C_{1}}\bigl[T(x_{n}) \bigr] \bigr\Vert \\ \le {}&\Vert x_{n} - x_{n + 1} \Vert + \bigl\Vert \alpha _{n}\bigl(\rho U(x_{n}) - \mu F\bigl(T(z_{n})\bigr)\bigr) + T(z_{n}) - T(x_{n}) \bigr\Vert \\ \le{}& \Vert x_{n} - x_{n + 1} \Vert + \alpha _{n} \bigl\Vert \rho U(x_{n}) - \mu F\bigl(T(z_{n})\bigr) \bigr\Vert + \Vert z_{n} - x_{n} \Vert \\ ={}& \Vert x_{n} - x_{n + 1} \Vert + \alpha _{n} \bigl\Vert \rho U(x_{n}) - \mu F\bigl(T(z_{n})\bigr) \bigr\Vert + \bigl\Vert \beta _{n}Sx_{n} + (1 - \beta _{n})y_{n} - x_{n} \bigr\Vert \\ \le{}& \Vert x_{n} - x_{n + 1} \Vert + \alpha _{n} \bigl\Vert \rho U(x_{n}) - \mu F\bigl(T(z_{n})\bigr) \bigr\Vert \\ &{} + \beta _{n} \Vert Sx_{n} - x_{n} \Vert + (1 - \beta _{n}) \Vert y_{n} - x_{n} \Vert . \end{aligned}$$

Noting that \(\lim_{n \to \infty } \alpha _{n} = 0\), \(\lim_{n \to \infty } \beta _{n} = 0\), \(\lim_{n \to \infty } \Vert x_{n} - y_{n} \Vert = 0\), and \(\lim_{n \to \infty } \Vert x_{n + 1} - x_{n} \Vert = 0\), we have \(\lim_{n \to \infty } \Vert x_{n} - T(x_{n}) \Vert = 0\).

Step 3. Since \(\{ x_{n} \} \) is bounded, there exists a subsequence \(\{ x_{n_{i}} \} \) of \(\{ x_{n} \} \) that converges weakly to some point \(z \in C_{1}\). We first show that \(z \in \mathrm{Fix}(T)\). Assume that \(z \notin \mathrm{Fix}(T)\). Since \(x_{n_{i}}\) converges weakly to z and \(Tz \ne z\), by Lemma 2.9, we have

$$\begin{aligned} &\mathop{\lim \inf}_{i \to \infty } \Vert x_{n_{i}} - z \Vert \\ &\quad < \mathop{\lim \inf}_{i \to \infty } \Vert x_{n_{i}} - Tz \Vert \le \mathop{\lim \inf}_{i \to \infty } \bigl( \Vert x_{n_{i}} - Tx_{n_{i}} \Vert + \Vert Tx_{n_{i}} - Tz \Vert \bigr) \le \mathop{\lim \inf}_{i \to \infty } \Vert x_{n_{i}} - z \Vert , \end{aligned}$$

which is a contradiction. Thus \(z \in \mathrm{Fix}(T)\). To establish the convergence of the sequence \(\{ x_{n} \} \), we prove that the sequence \(\{ x_{n} \} \) generated by (8) converges strongly to w, the unique solution of the variational inequality

$$\begin{aligned} \bigl\langle \rho U(w) - \mu F(w),x - w \bigr\rangle \le 0,\quad \forall x \in \Theta. \end{aligned}$$

In fact, noting that \(u_{n} = R_{r_{n},F_{1}}(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n})\), we have

$$\begin{aligned} F_{1}(u_{n},y) + \frac{1}{r_{n}} \langle y - u_{n},u_{n} - x_{n} \rangle - \frac{1}{r_{n}} \bigl\langle y - u_{n},\gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle \ge 0,\quad\forall y \in C_{1}. \end{aligned}$$

From the monotonicity of \(F_{1}\), we have

$$\begin{aligned} - \frac{1}{r_{n}} \bigl\langle y - u_{n},\gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n} \bigr\rangle + \frac{1}{r_{n}} \langle y - u_{n},u_{n} - x_{n} \rangle \ge F_{1}(y,u_{n}),\quad \forall y \in C_{1}, \end{aligned}$$

and

$$\begin{aligned} - \frac{1}{r_{n_{i}}} \bigl\langle y - u_{n_{i}},\gamma A^{*}(R_{r_{n_{i}},F_{2}} - I)Ax_{n_{i}} \bigr\rangle + \biggl\langle y - u_{n_{i}},\frac{u_{n_{i}} - x_{n_{i}}}{r_{n_{i}}} \biggr\rangle \ge F_{1}(y,u_{n_{i}}),\quad\forall y \in C_{1}. \end{aligned}$$

Since \(\Vert u_{n} - x_{n} \Vert \to 0\) and \(\Vert (R_{r_{n},F_{2}} - I)Ax_{n} \Vert \to 0\), the subsequence \(\{ u_{n_{i}} \} \) also converges weakly to z. By (A4), we know that \(F_{1}(y,z) \le 0\) for all \(y \in C_{1}\). Let \(y_{t} = ty + (1 - t)z\) with \(t \in (0,1]\); it follows from \(y,z \in C_{1}\) and the convexity of \(C_{1}\) that \(y_{t} \in C_{1}\), and hence \(F_{1}(y_{t},z) \le 0\). So, from (A1) and (A4), we have

$$\begin{aligned} 0 = F_{1}(y_{t},y_{t}) \le tF_{1}(y_{t},y) + (1 - t)F_{1}(y_{t},z) \le tF_{1}(y_{t},y). \end{aligned}$$

Therefore \(F_{1}(y_{t},y) \ge 0\). Letting \(t \downarrow 0\) and using (A3), we obtain \(F_{1}(z,y) \ge 0\) for all \(y \in C_{1}\), that is, \(z \in EP(F_{1})\).

Next we show that \(Az \in EP(F_{2})\). Take a subsequence \(\{ x_{n_{k}} \} \) of \(\{ x_{n} \} \) that converges weakly to z (for instance, \(\{ x_{n_{i}} \} \) above); since A is a bounded linear operator, \(\{ Ax_{n_{k}} \} \) converges weakly to Az. Setting \(\varpi _{n_{k}} = Ax_{n_{k}} - R_{r_{n_{k}},F_{2}}Ax_{n_{k}}\), it follows from \(\lim_{n \to \infty } \Vert (R_{r_{n},F_{2}} - I)Ax_{n} \Vert = 0\) that \(\lim_{k \to \infty } \varpi _{n_{k}} = 0\). By Lemma 2.1, we have

$$F_{2}(Ax_{n_{k}} - \varpi _{n_{k}},y) + \frac{1}{r_{n_{k}}} \bigl\langle y - (Ax_{n_{k}} - \varpi _{n_{k}}),(Ax_{n_{k}} - \varpi _{n_{k}}) - Ax_{n_{k}} \bigr\rangle \ge 0,\quad \forall y \in C_{2}. $$

Since \(F_{2}\) is upper semicontinuous in the first argument, taking the limit superior in the above inequality as \(k \to \infty \), we obtain \(F_{2}(Az,y) \ge 0\), \(\forall y \in C_{2}\), which means that \(Az \in EP(F_{2})\); hence \(z \in \Gamma \). Next, we claim that \(z \in \mathrm{Fix}(G)\). From Lemma 2.6, we know that \(G = T^{N}\) is nonexpansive, and

$$\begin{aligned} \Vert y_{n} - Gy_{n} \Vert = \bigl\Vert T^{N}u_{n} - T^{N}y_{n} \bigr\Vert \le \Vert u_{n} - y_{n} \Vert . \end{aligned}$$

It follows from \(\lim_{n \to \infty } \Vert u_{n} - x_{n} \Vert = 0\) and \(\lim_{n \to \infty } \Vert x_{n} - y_{n} \Vert = 0\) that \(\lim_{n \to \infty } \Vert y_{n} - Gy_{n} \Vert = 0\). Furthermore, we get

$$\begin{aligned} \Vert x_{n} - Gx_{n} \Vert &\le \Vert x_{n} - y_{n} \Vert + \Vert y_{n} - Gy_{n} \Vert + \Vert Gy_{n} - Gx_{n} \Vert \\ &\le 2 \Vert x_{n} - y_{n} \Vert + \Vert y_{n} - Gy_{n} \Vert , \end{aligned}$$

which implies \(\lim_{n \to \infty } \Vert x_{n} - Gx_{n} \Vert = 0\). Then, by Lemma 2.10, we obtain \(z \in \mathrm{Fix}(G)\). Thus, we have \(z \in \Theta \). Observe that the constants satisfy \(0 \le \rho \tau < \nu \) and \(k \ge \eta \); by Lemma 2.7, the operator \(\mu F - \rho U\) is \((\mu \eta - \rho \tau )\)-strongly monotone. Hence the variational inequality above has a unique solution, which we denote by \(w \in \Theta \).
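Although Lemma 2.7 is invoked here, the underlying computation is elementary and may be sketched as follows, using only the η-strong monotonicity of F, the τ-Lipschitz continuity of U, and the assumption \(k \ge \eta \):

$$\begin{aligned} \bigl\langle (\mu F - \rho U) (x) - (\mu F - \rho U) (y),x - y \bigr\rangle &= \mu \bigl\langle F(x) - F(y),x - y \bigr\rangle - \rho \bigl\langle U(x) - U(y),x - y \bigr\rangle \\ &\ge \mu \eta \Vert x - y \Vert ^{2} - \rho \tau \Vert x - y \Vert ^{2} = (\mu \eta - \rho \tau ) \Vert x - y \Vert ^{2}, \end{aligned}$$

and \(\mu \eta - \rho \tau > 0\) because \(1 - \mu (2\eta - \mu k^{2}) = (1 - \mu \eta )^{2} + \mu ^{2}(k^{2} - \eta ^{2}) \ge (1 - \mu \eta )^{2}\) yields \(\nu = 1 - \sqrt{1 - \mu (2\eta - \mu k^{2})} \le \mu \eta \), so \(\rho \tau < \nu \le \mu \eta \).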

Finally, we show that \(x_{n} \to w\). Choose a subsequence \(\{ x_{n_{i}} \} \) of \(\{ x_{n} \} \) along which the limit superior below is attained; by the arguments above we may assume that it converges weakly to \(z \in \Theta \). Then

$$\begin{aligned} \mathop{\lim \sup}_{n \to \infty } \bigl\langle \rho U(w) - \mu F(w),x_{n} - w \bigr\rangle &= \lim_{i \to \infty } \bigl\langle \rho U(w) - \mu F(w),x_{n_{i}} - w \bigr\rangle \\ &= \bigl\langle \rho U(w) - \mu F(w),z - w \bigr\rangle \le 0, \end{aligned}$$

and

$$\begin{aligned} &\Vert x_{n + 1} - w \Vert ^{2}\\ &\quad = \bigl\langle P_{C_{1}}[V_{n}] - w,x_{n + 1} - w \bigr\rangle \\ &\quad= \bigl\langle P_{C_{1}}[V_{n}] - V_{n},P_{C_{1}}[V_{n}] - w \bigr\rangle + \langle V_{n} - w,x_{n + 1} - w \rangle \\ &\quad\le \bigl\langle \alpha _{n}\bigl(\rho U(x_{n}) - \mu F(w) \bigr) + (I - \alpha _{n}\mu F) \bigl(T(z_{n})\bigr) - (I - \alpha _{n}\mu F) \bigl(T(w)\bigr),x_{n + 1} - w \bigr\rangle \\ &\quad= \bigl\langle \alpha _{n}\rho \bigl(U(x_{n}) - U(w) \bigr),x_{n + 1} - w \bigr\rangle + \alpha _{n} \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &\qquad{}+ \bigl\langle (I - \alpha _{n}\mu F) \bigl(T(z_{n})\bigr) - (I - \alpha _{n}\mu F) \bigl(T(w)\bigr),x_{n + 1} - w \bigr\rangle \\ &\quad\le \alpha _{n}\rho \tau \Vert x_{n} - w \Vert \Vert x_{n + 1} - w \Vert + \alpha _{n} \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &\qquad{}+ (1 - \alpha _{n}\nu ) \Vert z_{n} - w \Vert \Vert x_{n + 1} - w \Vert \\ &\quad\le \alpha _{n}\rho \tau \Vert x_{n} - w \Vert \Vert x_{n + 1} - w \Vert + \alpha _{n} \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &\qquad{}+ (1 - \alpha _{n}\nu ) \bigl\{ \beta _{n} \Vert Sx_{n} - Sw \Vert + \beta _{n} \Vert Sw - w \Vert + (1 - \beta _{n}) \Vert y_{n} - w \Vert \bigr\} \Vert x_{n + 1} - w \Vert \\ &\quad\le \alpha _{n}\rho \tau \Vert x_{n} - w \Vert \Vert x_{n + 1} - w \Vert + \alpha _{n} \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &\qquad{}+ (1 - \alpha _{n}\nu ) \bigl\{ \beta _{n} \Vert x_{n} - w \Vert + \beta _{n} \Vert Sw - w \Vert + (1 - \beta _{n}) \Vert x_{n} - w \Vert \bigr\} \Vert x_{n + 1} - w \Vert \\ &\quad= \bigl( 1 - \alpha _{n}(\nu - \rho \tau ) \bigr) \Vert x_{n} - w \Vert \Vert x_{n + 1} - w \Vert + \alpha _{n} \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &\qquad{}+ (1 - \alpha _{n}\nu )\beta _{n} \Vert Sw - w \Vert \Vert x_{n + 1} - w \Vert \\ &\quad\le \frac{(1 - \alpha _{n}(\nu - \rho \tau ))}{2} \bigl( \Vert x_{n} - w \Vert ^{2} + \Vert x_{n + 1} - w \Vert ^{2} \bigr) + \alpha _{n} \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &\qquad{}+ (1 - \alpha _{n}\nu )\beta _{n} \Vert Sw - w \Vert \Vert x_{n + 1} - w \Vert , \end{aligned}$$

which implies that

$$\begin{aligned} \Vert x_{n + 1} - w \Vert ^{2} \le{}& \frac{1 - \alpha _{n}(\nu - \rho \tau )}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert x_{n} - w \Vert ^{2} + \frac{2\alpha _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &{}+ \frac{2 ( 1 - \alpha _{n}\nu )\beta _{n}}{1 + \alpha _{n}(\nu - \rho \tau )} \Vert Sw - w \Vert \Vert x_{n + 1} - w \Vert \\ \le{}& \bigl( 1 - \alpha _{n}(\nu - \rho \tau ) \bigr) \Vert x_{n} - w \Vert ^{2} \\ &{}+ \frac{2\alpha _{n} ( \nu - \rho \tau )}{1 + \alpha _{n}(\nu - \rho \tau )} \biggl\{ \frac{1}{\nu - \rho \tau } \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &{} + \frac{ ( 1 - \alpha _{n}\nu )\beta _{n}}{\alpha _{n}(\nu - \rho \tau )} \Vert Sw - w \Vert \Vert x_{n + 1} - w \Vert \biggr\} . \end{aligned}$$

Let \(\sigma _{n} = \Vert x_{n} - w \Vert ^{2}\), \(\phi _{n} = \alpha _{n}(\nu - \rho \tau )\), and

$$\begin{aligned} \varphi _{n} = {}&\frac{2\alpha _{n} ( \nu - \rho \tau )}{1 + \alpha _{n}(\nu - \rho \tau )} \biggl\{ \frac{1}{\nu - \rho \tau } \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &{} + \frac{ ( 1 - \alpha _{n}\nu )\beta _{n}}{\alpha _{n}(\nu - \rho \tau )} \Vert Sw - w \Vert \Vert x_{n + 1} - w \Vert \biggr\} . \end{aligned}$$

Then the above inequality can be rewritten as

$$\begin{aligned} \sigma _{n + 1} \le (1 - \phi _{n})\sigma _{n} + \varphi _{n}. \end{aligned}$$

From Conditions (i) and (ii) of Theorem 3.1, we have

$$\begin{aligned} &\phi _{n} \to 0 \ (n \to \infty ),\qquad \sum_{n = 0}^{\infty } \phi _{n} = \infty,\quad\text{and} \\ &\mathop{\lim \sup}_{n \to \infty } \frac{\varphi _{n}}{\phi _{n}} = \mathop{\lim \sup} _{n \to \infty } \frac{2}{1 + \alpha _{n}(\nu - \rho \tau )} \biggl\{ \frac{1}{\nu - \rho \tau } \bigl\langle \rho U(w) - \mu F(w),x_{n + 1} - w \bigr\rangle \\ &\phantom{\mathop{\lim \sup}_{n \to \infty } \frac{\varphi _{n}}{\phi _{n}} =}{}+ \frac{ ( 1 - \alpha _{n}\nu )\beta _{n}}{\alpha _{n}(\nu - \rho \tau )} \Vert Sw - w \Vert \Vert x_{n + 1} - w \Vert \biggr\} \le 0. \end{aligned}$$

Then all conditions in Lemma 2.3 are satisfied; thus we obtain \(\sigma _{n} \to 0\ (n \to \infty )\), that is, \(x_{n} \to w\ (n \to \infty )\). This completes the proof. □
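The mechanism driving this last step is the recursion itself: once \(\sigma _{n + 1} \le (1 - \phi _{n})\sigma _{n} + \varphi _{n}\) with \(\phi _{n} \to 0\), \(\sum_{n} \phi _{n} = \infty \), and \(\mathop{\lim \sup}_{n} \varphi _{n}/\phi _{n} \le 0\), the error \(\sigma _{n}\) is forced to zero. The following minimal numerical sketch, with hypothetical parameter sequences chosen only for illustration (not those of Theorem 3.1), shows this decay in the worst case where the inequality is an equality.

```python
# Minimal numerical sketch (hypothetical sequences, for illustration only).
# It iterates the worst case of
#     sigma_{n+1} <= (1 - phi_n) * sigma_n + varphi_n,
# where phi_n -> 0, the series of phi_n diverges, and varphi_n / phi_n -> 0,
# and shows that sigma_n decays to 0.

def recursion_decay(num_steps=200_000, sigma0=1.0):
    sigma = sigma0
    for n in range(1, num_steps + 1):
        phi = 1.0 / n              # phi_n -> 0, but sum of phi_n diverges
        delta = 1.0 / n ** 0.5     # delta_n = varphi_n / phi_n -> 0
        sigma = (1.0 - phi) * sigma + phi * delta
    return sigma

print(recursion_decay())  # prints a value close to 0
```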

Corollary 3.1

For \(i \in \{ 1,2 \} \), let \(H_{i}\) be a real Hilbert space, let \(C_{i}\) be a nonempty closed convex subset of \(H_{i}\), and let \(F_{i}:C_{i} \times C_{i} \to R\) be a bifunction. Let \(A:H_{1} \to H_{2}\) be a bounded linear operator with adjoint \(A^{*}\). Let \(B_{1}\) be \(\xi _{1}\)-inverse-strongly monotone. Let \(F:C_{1} \to C_{1}\) be a k-Lipschitzian and η-strongly monotone mapping, and let \(U:C_{1} \to C_{1}\) be a τ-Lipschitzian mapping. Let \(S,T:C_{1} \to C_{1}\) be two nonexpansive mappings such that \(\Theta = \Gamma \cap \mathrm{Fix}(G) \cap \mathrm{Fix}(T) \ne \emptyset \). For arbitrarily given \(x_{0} \in C_{1}\), let the iterative sequences \(\{ u_{n} \} \), \(\{ y_{n} \} \), and \(\{ x_{n} \} \) be generated by

$$\begin{aligned} \textstyle\begin{cases} u_{n} = R_{r_{n},F_{1}}(x_{n} + \gamma A^{*}(R_{r_{n},F_{2}} - I)Ax_{n}), \\ y_{n} = P_{C_{1}}(I - \lambda _{1}B_{1})u_{n}, \\ z_{n} = \beta _{n}Sx_{n} + (1 - \beta _{n})y_{n}, \\ x_{n + 1} = P_{C_{1}}[\alpha _{n}\rho U(x_{n}) + (I - \alpha _{n}\mu F)(T(z_{n}))], \end{cases}\displaystyle \end{aligned}$$
(39)

where \(\{ r_{n} \} \subset (0,\infty )\), \(\gamma \in (0,1 / L_{A})\), and \(L_{A}\) is the spectral radius of the operator \(A^{*}A\). Suppose that the parameters satisfy \(0 < \mu < \frac{2\eta }{k^{2}}\), \(k \ge \eta \), \(0 \le \rho \tau < \nu \), where \(\nu = 1 - \sqrt{1 - \mu (2\eta - \mu k^{2})}\), and that \(\{ \alpha _{n} \} \), \(\{ \beta _{n} \} \) are sequences in \((0,1)\) satisfying the following conditions:

  (i) \(\lim_{n \to \infty } \alpha _{n} = 0\), \(\sum_{n = 0}^{\infty } \alpha _{n} = \infty \), and \(\sum_{n = 1}^{\infty } \vert \alpha _{n - 1} - \alpha _{n} \vert < \infty \);

  (ii) \(\mathop{\lim \sup}_{n \to \infty } \frac{\beta _{n}}{\alpha _{n}} = 0\), \(\beta _{n} \le \alpha _{n}\ ( n \ge 1 )\), and \(\sum_{n = 1}^{\infty } \vert \beta _{n - 1} - \beta _{n} \vert < \infty \);

  (iii) \(\mathop{\lim \inf}_{n \to \infty } r_{n} > 0\) and \(\sum_{n = 1}^{\infty } \vert r_{n - 1} - r_{n} \vert < \infty \).

Then the sequence \(\{ x_{n} \} \) generated by (39) converges strongly to \(w \in \Theta \).

Proof

Putting \(N = 1\) in Theorem 3.1, we obtain the desired conclusion directly. □
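To illustrate how scheme (39) can be organized in practice, the following minimal Python sketch performs the iteration for a finite number of steps. The resolvents \(R_{r,F_{1}}\), \(R_{r,F_{2}}\), the projection \(P_{C_{1}}\), and the operators A, \(B_{1}\), S, T, U, F are hypothetical placeholders that must be supplied for a concrete problem instance; this is a structural sketch only, not an implementation tied to the convergence analysis above.

```python
import numpy as np

# Structural sketch of iteration (39); all operator arguments are hypothetical
# placeholders (callables and a matrix A) supplied by the user.
def iterate_39(x0, A, R_F1, R_F2, P_C1, B1, S, T, U, F,
               rho, mu, gamma, lam1, alphas, betas, rs):
    x = np.asarray(x0, dtype=float)
    for alpha, beta, r in zip(alphas, betas, rs):
        Ax = A @ x
        u = R_F1(r, x + gamma * (A.T @ (R_F2(r, Ax) - Ax)))      # u_n
        y = P_C1(u - lam1 * B1(u))                               # y_n
        z = beta * S(x) + (1.0 - beta) * y                       # z_n
        Tz = T(z)
        x = P_C1(alpha * rho * U(x) + Tz - alpha * mu * F(Tz))   # x_{n+1}
    return x
```

For instance, the hypothetical choices \(\alpha _{n} = 1/(n + 1)\), \(\beta _{n} = 1/(n + 1)^{2}\), and \(r_{n} = 1\) satisfy conditions (i)–(iii).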

4 Conclusion

In this paper, we considered a hierarchical fixed point problem (2), a split equilibrium problem (4)–(5), and a system of variational inequalities (7) in Hilbert spaces. An iterative algorithm for finding a common element of the solution sets of these three problems was presented, and strong convergence of the proposed algorithm was proved. The results obtained here are new and complement a number of related results in the literature.