1 Introduction

Convex optimization is widely and increasingly used as a tool to solve problems arising in applied mathematics, with applications in engineering, medicine, economics, management, industry, and other branches of science. The subject is not only expanding in its own right but also serves as an interdisciplinary bridge between these fields. A convex optimization problem is typically solved by an optimization algorithm or an iterative method that produces a feasible, and ultimately optimal, solution. Iterative methods are ubiquitous in convex optimization, and new iterative and theoretical techniques continue to be proposed and analyzed for real-world and theoretical problems that can be modeled in this general framework. Such methods concern the selection of the best among many possible decisions in a real-life environment, the construction of computational methods to find optimal solutions, the study of their theoretical properties, and the assessment of the computational performance of the resulting numerical algorithms.

Monotone operator theory is a fascinating field of research in nonlinear functional analysis and has found valuable applications in convex optimization, subgradient theory, partial differential equations, variational inequalities, signal and image processing, evolution equations, and inclusions; see, for instance, [1–3, 6, 8, 19, 21, 26, 27, 30–35, 43, 47, 50, 52, 53] and the references cited therein. It is remarked that a convex optimization problem can be translated into the problem of finding a zero of a maximal monotone operator defined on a Hilbert space. On the other hand, the problem of finding a zero of the sum of two (maximal) monotone operators is of fundamental importance in convex optimization and variational analysis [38, 46, 56, 57]. The forward-backward algorithm is prominent among the various splitting algorithms for finding a zero of the sum of two maximal monotone operators [38]; see also [58]. Splitting algorithms admit parallel computing architectures and thus reduce the complexity of the problems under consideration. Moreover, the forward-backward algorithm efficiently handles problems involving both smooth and nonsmooth functions.

In 1964, Polyak [48] employed the inertial extrapolation technique, based on the heavy ball method for a second-order-in-time dynamical system, to equip iterative algorithms with a fast convergence characteristic; see also [49]. It is remarked that the inertial term is computed from the difference of the two preceding iterates. The inertial extrapolation technique was originally proposed for minimizing differentiable convex functions, but it has since been generalized in different ways. The heavy ball method has been incorporated into various iterative algorithms to obtain faster convergence; see, for example, [4, 5, 10–12, 22, 40, 45] and the references cited therein. It is worth mentioning that the forward-backward algorithm has been modified by employing the heavy ball method for convex optimization problems.

The theory of equilibrium problems provides a systematic approach to a diverse range of problems arising in physics, optimization, variational inequalities, transportation, economics, networks, and noncooperative games; see, for example, [9, 21, 23] and the references cited therein. Existence results for equilibrium problems can be found in the seminal work of Blum and Oettli [9]. Moreover, the theory has a strong computational flavor and flourished significantly following the influential paper of Combettes and Hirstoaga [20]. The classical equilibrium problem has been generalized in several interesting ways to solve real-world problems. In 2012, Censor et al. [16] introduced the split variational inequality problem (SVIP), which aims to solve a pair of variational inequality problems in such a way that the solution of one variational inequality problem, mapped by a given bounded linear operator, solves the other variational inequality.

Motivated by the work of Censor et al. [16], Moudafi [44] generalized the SVIP to the split monotone variational inclusion problem (SMVIP), which includes, as special cases, the split variational inequality problem, the split common fixed point problem, the split zero problem, the split equilibrium problem, and the split feasibility problem. These problems have already been studied and successfully employed as models in intensity-modulated radiation therapy treatment planning; see [14, 15]. This formalism is also at the core of the modeling of many inverse problems arising in phase retrieval and other real-world settings, for instance, in sensor networks, in computerized tomography, and in data compression; see, for example, [18, 21]. Several methods have been proposed and analyzed to solve the split equilibrium problem and the mixed split equilibrium problem in Hilbert spaces; see, for example, [24, 25, 28, 29, 36, 37, 51, 54, 59, 60] and the references cited therein. Inspired and motivated by the above-mentioned results and the ongoing research in this direction, we employ a modified inertial forward-backward algorithm to find a common solution of the monotone inclusion problem and the split mixed equilibrium problem in Hilbert spaces. The proposed algorithm converges weakly to a common solution under a suitable set of control conditions. Strong convergence of the proposed algorithm is also obtained by employing the shrinking projection technique over half-spaces.

The rest of the paper is organized as follows: Sect. 2 contains preliminary concepts and results regarding monotone operator theory and equilibrium problem theory. Section 3 comprises the weak convergence results for the proposed algorithm, and Sect. 4 is devoted to its strong convergence analysis. The final section deals with the efficiency of the proposed algorithm and its comparison with an existing algorithm by numerical experiments.

2 Preliminaries

Throughout this section, we fix some necessary notions and concepts which will be required in the sequel (see [7, 8] for a detailed account). We denote by \(\mathbb{N}\) the set of natural numbers and by \(\mathbb{R}\) the set of real numbers. Let \(C\subseteq \mathcal{H}_{1}\) and \(Q\subseteq \mathcal{H}_{2}\) be two nonempty subsets of real Hilbert spaces \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), each endowed with the inner product \(\langle \cdot,\cdot \rangle \) and the associated norm \(\Vert \cdot \Vert \). We write \(x_{n}\rightarrow x\) (resp. \(x_{n}\rightharpoonup x\)) to indicate strong (resp. weak) convergence of a sequence \(\{x_{n}\}_{n=1}^{\infty }\).

Let \(A:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}\) be an operator. We denote by \(\operatorname{dom} ( A ) = \{ x\in \mathcal{H}_{1}:Ax\neq \emptyset \} \) the domain of A, by \(\operatorname{Gr}(A)= \{ ( x,u ) \in \mathcal{H}_{1}\times \mathcal{H}_{1}:u\in Ax \} \) the graph of A, and by \(\operatorname{zer} ( A ) = \{ x\in \mathcal{H}_{1}:0\in Ax \} \) the set of zeros of A. The inverse of A, denoted by \(A^{-1}\), is defined by \(( u,x ) \in \operatorname{Gr}(A^{-1})\) if and only if \(( x,u ) \in \operatorname{Gr}(A)\), and the resolvent of A is \(J_{A}= ( \operatorname{Id}+A ) ^{-1}\), where Id denotes the identity operator. It is remarked that \(J_{A}:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}\) is a single-valued and maximally monotone operator provided that A is maximally monotone. Recall that A is said to be: (i) monotone if \(\langle x-y,u-v \rangle \geq 0\) for all \(( x,u ), ( y,v ) \in \operatorname{Gr}(A)\); (ii) maximally monotone if A is monotone and there exists no monotone operator \(B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}\) such that \(\operatorname{Gr}(B)\) properly contains \(\operatorname{Gr}(A)\); (iii) strongly monotone with modulus \(\alpha >0\) if \(\langle x-y,u-v \rangle \geq \alpha \Vert x-y \Vert ^{2}\) for all \(( x,u ), ( y,v ) \in \operatorname{Gr}(A)\); and (iv) inverse strongly monotone (co-coercive) with parameter \(\beta >0\) if \(\langle x-y,Ax-Ay \rangle \geq \beta \Vert Ax-Ay \Vert ^{2}\) for all \(x,y\in \mathcal{H}_{1}\).

Let \(f:\mathcal{H}_{1}\rightarrow \mathbb{R}\cup \{ +\infty \} \) be a proper convex lower semicontinuous function, and let \(g:\mathcal{H}_{1}\rightarrow \mathbb{R}\) be a convex differentiable function with Lipschitz continuous gradient. The convex minimization problem associated with f and g reads as follows:

$$ \min_{x\in \mathcal{H}_{1}} \bigl\{ f ( x ) +g ( x ) \bigr\} . $$

The subdifferential of a function f is defined and denoted as follows:

$$ \partial f ( x ) = \bigl\{ x^{\ast }\in \mathcal{H}_{1}:f ( y ) \geq f ( x ) + \bigl\langle x^{\ast },y-x \bigr\rangle \text{ for all }y\in \mathcal{H}_{1} \bigr\} . $$

It is remarked that the subdifferential of a proper convex lower semicontinuous function is a maximally monotone operator. The proximity operator of a function f is defined as follows:

$$ \operatorname{prox}_{f}:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}:x\mapsto \underset{y\in \mathcal{H}_{1}}{ \operatorname{argmin}} \biggl( f ( y ) +\frac{1}{2} \Vert x-y \Vert ^{2} \biggr). $$
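
As a concrete illustration (not part of the original development), the proximity operator of \(f=\lambda \Vert \cdot \Vert _{1}\) on \(\mathbb{R}^{d}\) reduces to componentwise soft thresholding; a minimal Python sketch:

```python
import numpy as np

def prox_l1(x, lam):
    """Proximity operator of f(y) = lam * ||y||_1 evaluated at x.

    prox_f(x) = argmin_y { lam*||y||_1 + 0.5*||y - x||^2 }, whose closed
    form is the componentwise soft-thresholding map sign(x)*max(|x|-lam, 0).
    """
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Entries with magnitude below lam are shrunk to zero.
print(prox_l1(np.array([3.0, -0.5, 1.2]), lam=1.0))  # approximately [ 2.  -0.   0.2]
```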

Note that the proximity operator is linked with the subdifferential operator through the relations \(\operatorname{argmin} ( f ) =\operatorname{zer} ( \partial f ) \) and \(\operatorname{prox}_{f}=J_{\partial f}\). Utilizing this connection, we state that the monotone inclusion problem with respect to a maximally monotone operator A and an arbitrary operator B is to find

$$ x^{\ast }\in C\quad\text{such that }0\in Ax^{\ast }+Bx^{\ast }. $$
(1)

The solution set of problem (1) is denoted by \(\operatorname{zer} ( A+B ) \).
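
To fix ideas (an illustrative sketch, not taken from the paper), when \(A=\nabla g\) for a smooth convex g and \(B=\partial f\) with \(f=\lambda \Vert \cdot \Vert _{1}\), the forward-backward algorithm for problem (1) alternates a gradient (forward) step on g with the resolvent (backward) step \(J_{sB}=\operatorname{prox}_{sf}\):

```python
import numpy as np

# Forward-backward splitting for 0 in Ax + Bx, with the illustrative choices
# A = grad g for g(x) = 0.5*||Mx - b||^2 and B = subdifferential of f for
# f(x) = lam*||x||_1, so that J_{sB} is the l1 proximity operator.
rng = np.random.default_rng(0)
M, b, lam = rng.standard_normal((20, 10)), rng.standard_normal(20), 0.1

L_g = np.linalg.norm(M, 2) ** 2          # Lipschitz constant of grad g
s = 1.0 / L_g                            # step size; A is (1/L_g)-inverse strongly monotone

def prox_l1(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.zeros(10)
for _ in range(500):
    grad = M.T @ (M @ x - b)             # forward (explicit) step: evaluate A at x
    x = prox_l1(x - s * grad, s * lam)   # backward (implicit) step: apply J_{sB}
# x now approximates a point in zer(A + B), i.e., a minimizer of g + f.
```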

We now define the split mixed equilibrium problem (SMEP).

Let \(F:C\times C\rightarrow \mathbb{R}\) and \(G:Q\times Q\rightarrow \mathbb{R}\) be two bifunctions. Let \(\phi _{f}:C\rightarrow \mathbb{R}\cup \{ +\infty \} \) and \(\phi _{g}:Q\rightarrow \mathbb{R}\cup \{ +\infty \} \) be two functions, and let \(h:\mathcal{H}_{1}\rightarrow \mathcal{H}_{2}\) be a bounded linear operator. The SMEP is to find

$$ x^{\ast }\in C\quad\text{such that }F \bigl( x^{\ast },x \bigr) + \phi _{f}(x)-\phi _{f}\bigl(x^{\ast }\bigr)\geq 0\text{ for all }x\in C $$
(2)

and

$$ y^{\ast }=hx^{\ast }\in Q\quad\text{such that }G \bigl( y^{\ast },y \bigr) +\phi _{g}(y)-\phi _{g} \bigl(y^{\ast }\bigr)\geq 0\text{ for all }y\in Q. $$
(3)

It is remarked that inequality (2) represents the mixed equilibrium problem, and its solution set is denoted by \(\operatorname{MEP}(F,\phi _{f})\). The solution set of the SMEP as defined in (2) and (3) is denoted by

$$ \operatorname{SMEP}(F,\phi _{f},G,\phi _{g}):=\bigl\{ x^{\ast }\in C:x^{\ast }\in \operatorname{MEP}(F, \phi _{f})\text{ and } hx^{\ast }\in \operatorname{MEP}(G,\phi _{g})\bigr\} . $$

Let C be a nonempty closed convex subset of a Hilbert space \(\mathcal{H}_{1}\). For each \(x\in \mathcal{H}_{1}\), there exists a unique nearest point of C, denoted by \(P_{C}x\), such that

$$ \Vert x-P_{C}x \Vert \leq \Vert x-y \Vert \quad\text{ for all }y \in C. $$

Such a mapping \(P_{C}:\mathcal{H}_{1}\rightarrow C\) is known as the metric projection or the nearest point projection of \(\mathcal{H}_{1}\) onto C. Moreover, \(P_{C}\) is nonexpansive and satisfies \(\langle x-P_{C}x,P_{C}x-y \rangle \geq 0\) for all \(x\in \mathcal{H}_{1}\) and \(y\in C\). It is remarked that \(P_{C}\) is a firmly nonexpansive mapping from \(\mathcal{H}_{1} \) onto C, that is,

$$ \Vert P_{C}x-P_{C}y \Vert ^{2}\leq \langle x-y,P_{C}x-P_{C}y \rangle \quad\text{for all }x,y\in \mathcal{H}_{1}. $$
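
For example (an illustrative sketch, not from the paper), the metric projection onto a closed ball has a simple closed form, and firm nonexpansiveness can be checked numerically:

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection P_C onto the closed ball C = {y : ||y - center|| <= radius}."""
    d = x - center
    nd = np.linalg.norm(d)
    return x.copy() if nd <= radius else center + (radius / nd) * d

# Numerical check of firm nonexpansiveness:
# ||P_C x - P_C y||^2 <= <x - y, P_C x - P_C y>.
rng = np.random.default_rng(0)
c = np.zeros(3)
x, y = 5.0 * rng.standard_normal(3), 5.0 * rng.standard_normal(3)
px, py = project_ball(x, c, 1.0), project_ball(y, c, 1.0)
assert np.linalg.norm(px - py) ** 2 <= np.dot(x - y, px - py) + 1e-12
```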

The following lemma collects some well-known results in the context of a real Hilbert space.

Lemma 2.1

([8])

The following properties hold in a real Hilbert space \(\mathcal{H}_{1}\):

  1. 1

    \(\Vert x-y\Vert ^{2}=\Vert x\Vert ^{2}-\Vert y\Vert ^{2}-2\langle x-y,y \rangle \)for all \(x,y\in \mathcal{H}_{1}\);

  2. 2

    \(\Vert x+y\Vert ^{2}\leq \Vert x\Vert ^{2}+2\langle y,x+y\rangle \)for all \(x,y\in \mathcal{H}_{1}\);

  3. 3

    \(\Vert \alpha x+(1-\alpha )y\Vert ^{2}=\alpha \Vert x\Vert ^{2}+(1- \alpha )\Vert y\Vert ^{2}-\alpha (1-\alpha )\Vert x-y\Vert ^{2}\) for every \(x,y\in \mathcal{H}_{1}\) and \(\alpha \in [ 0,1]\).
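
These identities follow by expanding inner products; as a quick illustrative sanity check (not part of the original text), in Python:

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(4), rng.standard_normal(4)
alpha = 0.3
sq = lambda v: np.linalg.norm(v) ** 2    # squared norm

# (1) ||x - y||^2 = ||x||^2 - ||y||^2 - 2<x - y, y>
assert np.isclose(sq(x - y), sq(x) - sq(y) - 2 * np.dot(x - y, y))
# (2) ||x + y||^2 <= ||x||^2 + 2<y, x + y>
assert sq(x + y) <= sq(x) + 2 * np.dot(y, x + y) + 1e-12
# (3) ||a x + (1-a) y||^2 = a||x||^2 + (1-a)||y||^2 - a(1-a)||x - y||^2
assert np.isclose(sq(alpha * x + (1 - alpha) * y),
                  alpha * sq(x) + (1 - alpha) * sq(y)
                  - alpha * (1 - alpha) * sq(x - y))
```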

Lemma 2.2

([13])

Let C be a nonempty closed convex subset of a real Hilbert space \(\mathcal{H}_{1}\), and let \(T:C\rightarrow C\) be a nonexpansive mapping. Then \((\operatorname{Id}-T)\) is demiclosed at the origin, that is, if \(\{x_{n}\}\) is a sequence in C such that \(x_{n}\rightharpoonup x\) and \((\operatorname{Id}-T)x_{n}\rightarrow 0\), then \((\operatorname{Id}-T)x=0\).

Assumption 2.3

([9])

Let C be a nonempty closed convex subset of a real Hilbert space \(\mathcal{H}_{1}\). Let \(F:C\times C\rightarrow \mathbb{R}\) be a bifunction and \(\phi _{f}: C \rightarrow \mathbb{R}\cup \{+\infty \}\) be a proper convex lower semicontinuous function such that F satisfies the following conditions:

  1. (A1)

    \(F(x,x)=0\) for all \(x\in C\);

  2. (A2)

    F is monotone, i.e., \(F(x,y)+F(y,x)\leq 0\) for all \(x,y\in C\);

  3. (A3)

    for each \(x,y,z\in C\), \(\limsup_{t\rightarrow 0^{+}}F(tz+(1-t)x,y) \leq F(x,y)\);

  4. (A4)

    for each \(x\in C\), \(y\mapsto F(x,y)\) is convex and lower semi-continuous.

Lemma 2.4

([41])

Let C be a nonempty closed convex subset of a real Hilbert space \(\mathcal{H}_{1}\), and let \(F,\phi _{f}\) be as in Assumption 2.3 such that \(C\cap \operatorname{dom} ( \phi _{f} ) \neq \emptyset \). For \(r>0\) and \(x\in \mathcal{H}_{1}\), there exists \(z\in C\) such that

$$ F(z,y)+\phi _{f}(y)-\phi _{f}(z)+\frac{1}{r}\langle y-z,z-x\rangle \geq 0\quad\textit{for all }y\in C. $$

Moreover, define a mapping \(T_{r}^{F}:\mathcal{H}_{1}\rightarrow C\)by

$$ T_{r}^{F}(x)= \biggl\{ z\in C:F(z,y)+\phi _{f}(y)-\phi _{f}(z)+ \frac{1}{r}\langle y-z,z-x\rangle \geq 0 \textit{ for all }y\in C \biggr\} $$

for all \(x\in \mathcal{H}_{1}\). Then the following results hold:

  1. (1)

    \(T_{r}^{F}\) is single-valued;

  2. (2)

    \(T_{r}^{F}\)is firmly nonexpansive, i.e., for every \(x,y \in \mathcal{H}_{1}, \Vert T_{r}^{F}x-T_{r}^{F}y \Vert ^{2}\leq \langle T_{r}^{F}x-T_{r}^{F}y,x-y \rangle\);

  3. (3)

    \(F(T_{r}^{F})= \{ x\in C:T_{r}^{F}(x)=x \} =\operatorname{MEP}(F, \phi _{f})\), where \(F(T_{r}^{F})\)denotes the set of fixed points of the mapping \(T_{r}^{F}\);

  4. (4)

    \(\operatorname{MEP}(F,\phi _{f})\)is closed and convex.

It is remarked that if \(G:Q\times Q\rightarrow \mathbb{R}\) is a bifunction satisfying conditions (A1)–(A4) and \(\phi _{g}:Q\rightarrow \mathbb{R}\cup \{+\infty \}\) is a proper convex lower semicontinuous function such that \(Q\cap \operatorname{dom} ( \phi _{g} ) \neq \emptyset \), where Q is a nonempty closed convex subset of a Hilbert space \(\mathcal{H}_{2}\), then, for each \(s>0\) and \(w\in \mathcal{H}_{2}\), we can define the following mapping:

$$ T_{s}^{G}(w)= \biggl\{ d\in Q:G(d,e)+\phi _{g}(e)-\phi _{g}(d)+\frac{1}{s} \langle e-d,d-w\rangle \geq 0 \textit{ for all }e\in Q \biggr\} , $$

which satisfies

  1. (1)

    \(T_{s}^{G}\)is single-valued;

  2. (2)

    \(T_{s}^{G}\)is firmly nonexpansive;

  3. (3)

    \(F(T_{s}^{G})=\operatorname{MEP}(G,\phi _{g})\);

  4. (4)

    \(\operatorname{MEP}(G,\phi _{g})\)is closed and convex.

Lemma 2.5

([55])

Let E be a Banach space satisfying Opial’s condition, and let \(\{x_{n}\}\) be a sequence in E. Let \(l,m\in E\) be such that \(\lim_{n\rightarrow \infty }\Vert x_{n}-l\Vert \) and \(\lim_{n\rightarrow \infty }\Vert x_{n}-m\Vert \) exist. If \(\{x_{n_{k}}\}\) and \(\{x_{m_{k}}\}\) are subsequences of \(\{x_{n}\}\) which converge weakly to l and m, respectively, then \(l=m\).

Lemma 2.6

([39])

Let E be a Banach space, let \(A:E\rightarrow E\) be an α-inverse strongly accretive operator of order q, and let \(B:E\rightarrow 2^{E}\) be an m-accretive operator. For \(r>0\), set \(T_{r}^{A,B}:=J_{r}^{B}(\operatorname{Id}-rA)=(\operatorname{Id}+rB)^{-1}(\operatorname{Id}-rA)\). Then we have:

  1. (a)

    For \(r > 0, F(T^{A,B}_{r})=(A+B)^{-1}(0)\);

  2. (b)

    For \(0 < s \leq r\) and \(x \in E\), \(\|x-T^{A,B}_{s}x\|\leq 2\|x-T^{A,B}_{r}x \|\).

Lemma 2.7

([39])

Let E be a uniformly convex and q-uniformly smooth Banach space for some \(q\in (0,2]\). Assume that \(A:E\rightarrow E\) is a single-valued α-inverse strongly accretive operator of order q. Then, given \(r>0\), there exists a continuous, strictly increasing, and convex function \(\varphi _{q}:\mathbb{R}^{+}\rightarrow \mathbb{R}^{+}\) with \(\varphi _{q}(0)=0\) such that, for all \(x,y\in B_{r}\), \(\Vert T_{r}^{A,B}x-T_{r}^{A,B}y\Vert ^{q}\leq \Vert x-y\Vert ^{q}-r( \alpha q-r^{q-1}k_{q})\Vert Ax-Ay\Vert ^{q}- \varphi _{q}( \Vert (\operatorname{Id}-J_{r}^{B})(\operatorname{Id}-rA)x-(\operatorname{Id}-J_{r}^{B})(\operatorname{Id}-rA)y\Vert )\), where \(k_{q}\) is the q-uniform smoothness coefficient of E.

Lemma 2.8

([4])

Let \(\{\xi _{n}\}\), \(\{\eta _{n}\}\), and \(\{\alpha _{n}\}\) be sequences in \([0,+\infty )\) satisfying \(\xi _{n+1}\leq \xi _{n}+\alpha _{n}(\xi _{n}-\xi _{n-1})+\eta _{n}\) for all \(n\geq 1\), where \(\sum_{n=1}^{\infty }\eta _{n}<+\infty \) and \(0\leq \alpha _{n}\leq \alpha <1\) for all \(n\geq 1\). Then the following hold:

  1. (a)

    \(\sum_{n \geq 1}[\xi _{n} - \xi _{n-1}]_{+} < +\infty \), where \([t]_{+}=\max \{t,0\}\);

  2. (b)

    there exists \(\xi ^{*} \in [0,+\infty )\)such that \(\lim_{n \rightarrow +\infty } \xi _{n} = \xi ^{*}\).

Lemma 2.9

([42])

Let C be a nonempty closed convex subset of a real Hilbert space \(\mathcal{H}_{1}\). For every \(x,y,z\in \mathcal{H}_{1}\) and \(\gamma \in \mathbb{R}\), the set

$$ D=\bigl\{ v\in C: \Vert y-v \Vert ^{2}\leq \Vert x-v \Vert ^{2}+\langle z,v \rangle +\gamma \bigr\} $$

is closed and convex.

Proposition 2.10

([17])

Let \(q>1\), and let E be a real smooth Banach space with the generalized duality mapping \(j_{q}\). Let \(m\in \mathbb{N}\)be fixed. Let \(\{x_{i}\}_{i=1}^{m}\subset E\)and \(t_{i}\geq 0\)for all \(i=1,2,3,\ldots,m\)with \(\sum_{i=1}^{m}t_{i}\leq 1\). Then we have

$$ \Biggl\Vert \sum_{i=1}^{m}t_{i}x_{i} \Biggr\Vert ^{q}\leq \frac{\sum_{i=1}^{m}t_{i} \Vert x_{i} \Vert ^{q}}{q-(q-1)(\sum_{i=1}^{m}t_{i})}. $$

3 Weak convergence results

In this section, we establish the convergence analysis of the inertial forward-backward splitting method for solving the split mixed equilibrium problem together with the monotone inclusion problem in the framework of Hilbert spaces. We first prove the following weak convergence theorem.

Theorem 3.1

Let \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) be two real Hilbert spaces, and let \(C\subseteq \mathcal{H}_{1}\) and \(Q\subseteq \mathcal{H}_{2}\) be nonempty closed convex subsets of \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), respectively. Let \(F:C\times C\rightarrow \mathbb{R}\) and \(G:Q\times Q\rightarrow \mathbb{R}\) be two bifunctions satisfying (A1)–(A4) of Assumption 2.3 such that G is upper semicontinuous in the first argument. Let \(h:\mathcal{H}_{1}\rightarrow \mathcal{H}_{2}\) be a bounded linear operator, and let \(\phi _{f}:C\rightarrow \mathbb{R}\cup \{+\infty \}\) and \(\phi _{g}:Q\rightarrow \mathbb{R}\cup \{+\infty \}\) be proper lower semicontinuous convex functions such that \(C\cap \operatorname{dom} ( \phi _{f} ) \neq \emptyset \) and \(Q\cap \operatorname{dom} ( \phi _{g} ) \neq \emptyset \). Let \(A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}\) be an α-inverse strongly monotone operator and \(B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}\) be a maximally monotone operator. Assume that \(\varGamma =(A+B)^{-1}(0)\cap \varOmega \neq \emptyset \), where \(\varOmega =\{x^{\ast }\in C:x^{\ast }\in \operatorname{MEP} (F,\phi _{f})\ \textit{and}\ hx^{\ast }\in \operatorname{MEP}(G,\phi _{g})\}\). For given \(x_{0},x_{1}\in \mathcal{H}_{1}\), let the iterative sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\) be generated by

$$\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}\bigl(\operatorname{Id}-\gamma h^{ \ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr)h \bigr)y_{n}, \\ &x_{n+1} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n},\quad n\geq 1 , \end{aligned}$$
(4)

where \(J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)\) with \(\{s_{n}\}\subset (0,2\alpha ) \) and \(\{\theta _{n}\}\subset [ 0,\theta ]\) for some \(\theta \in [ 0,1)\). Let \(\gamma \in (0,\frac{1}{L})\), where L is the spectral radius of \(h^{\ast }h\) and \(h^{\ast }\) is the adjoint of h. Let \(\{r_{n}\}\subset (0,\infty )\), and let \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\) be sequences in \([0,1]\). Assume that the following conditions hold:

  1. C1

    \(\sum_{n=1}^{\infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert <\infty \);

  2. C2

    \(0<\liminf_{n\rightarrow \infty }\alpha _{n}\leq \limsup_{n \rightarrow \infty }\alpha _{n}<1\);

  3. C3

    \(0<\liminf_{n\rightarrow \infty }\beta _{n}\leq \limsup_{n \rightarrow \infty }\beta _{n}<1\);

  4. C4

    \(\liminf_{n\rightarrow \infty }r_{n}>0\);

  5. C5

    \(0<\liminf_{n\rightarrow \infty }s_{n}\leq \limsup_{n\rightarrow \infty }s_{n}<2\alpha \).

Then the sequence \(\{x_{n}\}\) generated by (4) weakly converges to a point \(\hat{q}\in \varGamma \).
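
The structure of iteration (4) can be illustrated with a small numerical sketch. The following Python fragment is not the authors' implementation; it assumes the special case \(F\equiv 0\), \(\phi _{f}\equiv 0\), \(G\equiv 0\), \(\phi _{g}\equiv 0\) (so that \(T_{r_{n}}^{F}=P_{C}\) and \(T_{r_{n}}^{G}=P_{Q}\) with C and Q closed balls), \(A=\nabla g\) for a quadratic g, and \(B=\partial (\lambda \Vert \cdot \Vert _{1})\); the parameter choices shown are one possible way to satisfy (C1)–(C5).

```python
import numpy as np

rng = np.random.default_rng(1)
d1, d2 = 8, 5                                   # dimensions standing in for H1 and H2
h = rng.standard_normal((d2, d1))               # bounded linear operator h : H1 -> H2
L = np.linalg.norm(h, 2) ** 2                   # spectral radius of h* h
gamma = 0.9 / L                                 # gamma in (0, 1/L)

M = rng.standard_normal((12, d1))
b = rng.standard_normal(12)
A = lambda x: M.T @ (M @ x - b)                 # A = grad g, inverse strongly monotone with alpha = 1/||M||^2
alpha_A = 1.0 / np.linalg.norm(M, 2) ** 2
lam = 0.05

def proj_ball(x, radius):                       # plays the role of T_r^F = P_C and T_r^G = P_Q
    nx = np.linalg.norm(x)
    return x if nx <= radius else (radius / nx) * x

def prox_l1(x, t):                              # resolvent (Id + t*B)^{-1} for B = subdifferential of lam*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x_prev = np.zeros(d1)
x = rng.standard_normal(d1)
for n in range(1, 2001):
    # theta_n <= min(theta, 1/(n^2 * ||x_n - x_{n-1}||)) keeps condition (C1) summable
    theta = min(0.5, 1.0 / (n ** 2 * (np.linalg.norm(x - x_prev) + 1e-12)))
    a_n, b_n, s_n = 0.5, 0.5, alpha_A           # constant choices satisfying (C2), (C3), (C5)
    y = x + theta * (x - x_prev)                                        # inertial step
    t_y = proj_ball(y - gamma * h.T @ (h @ y - proj_ball(h @ y, 2.0)), 2.0)
    u = a_n * y + (1 - a_n) * t_y                                       # split-equilibrium step
    J_u = prox_l1(u - s_n * A(u), s_n * lam)                            # J_n = (Id + s_n B)^{-1}(Id - s_n A)
    x_prev, x = x, b_n * u + (1 - b_n) * J_u                            # forward-backward step
```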

Proof

First we show that \(h^{\ast }(\operatorname{Id}-T_{r_{n}}^{G})h\) is a \(\frac{1}{L}\)-inverse strongly monotone mapping. For this, we utilize the firm nonexpansiveness of \(T_{r_{n}}^{G}\) which implies that \((\operatorname{Id}-T_{r_{n}}^{G}) \) is a 1-inverse strongly monotone mapping. Now, observe that

$$\begin{aligned} &\bigl\Vert h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hx-h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy \bigr\Vert ^{2} \\ &\quad=\bigl\langle h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr) (hx-hy),h^{\ast } \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr) (hx-hy) \bigr\rangle \\ &\quad=\bigl\langle \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr) (hx-hy),h^{\ast }h\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr) (hx-hy) \bigr\rangle \\ &\quad\leq L\bigl\langle \bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr) (hx-hy),\bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr) (hx-hy) \bigr\rangle \\ &\quad=L \bigl\Vert \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr) (hx-hy) \bigr\Vert ^{2} \\ &\quad\leq L\bigl\langle x-y,h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr) (hx-hy)\bigr\rangle \end{aligned}$$

for all \(x,y\in \mathcal{H}_{1}\). So, we observe that \(h^{\ast }(\operatorname{Id}-T_{r_{n}}^{G})h\) is \(\frac{1}{L}\)-inverse strongly monotone. Moreover, \(\operatorname{Id}-\gamma h^{\ast }(\operatorname{Id}-T_{r_{n}}^{G})h\) is nonexpansive provided \(\gamma \in (0,\frac{1}{L})\). Now, we divide the rest of the proof into the following three steps.

Step 1. Show that \(\lim_{n\rightarrow \infty }\Vert x_{n}-\hat{p}\Vert \) exists for every \(\hat{p}\in \varGamma \).

In order to proceed, we first set \(T_{n}=T_{r_{n}}^{F}(\operatorname{Id}-\gamma h^{\ast }(\operatorname{Id}-T_{r_{n}}^{G})h)\), which is nonexpansive as the composition of the firmly nonexpansive mapping \(T_{r_{n}}^{F}\) and the nonexpansive mapping \(\operatorname{Id}-\gamma h^{\ast }(\operatorname{Id}-T_{r_{n}}^{G})h\); in particular, \(T_{n}\) is quasi-nonexpansive. For any \(\hat{p}\in \varGamma \), we get

$$\begin{aligned} \Vert y_{n}-\hat{p} \Vert &= \bigl\Vert x_{n}+ \theta _{n}(x_{n}-x_{n-1})- \hat{p} \bigr\Vert \\ &= \bigl\Vert (x_{n}-\hat{p})+\theta _{n}(x_{n}-x_{n-1}) \bigr\Vert \\ &\leq \Vert x_{n}-\hat{p} \Vert +\theta _{n} \Vert x_{n}-x_{n-1} \Vert . \end{aligned}$$
(5)

Utilizing (5), we have

$$\begin{aligned} \Vert u_{n}-\hat{p} \Vert &= \bigl\Vert \alpha _{n}y_{n}+(1-\alpha _{n})T_{n}y_{n}-\hat{p} \bigr\Vert \\ &\leq \alpha _{n} \Vert y_{n}-\hat{p} \Vert +(1- \alpha _{n}) \Vert T_{n}y_{n}-\hat{p} \Vert \\ &\leq \Vert y_{n}-\hat{p} \Vert \\ &\leq \Vert x_{n}-\hat{p} \Vert +\theta _{n} \Vert x_{n}-x_{n-1} \Vert . \end{aligned}$$
(6)

It follows from (4), (6), and Lemma 2.7 that

$$\begin{aligned} \Vert x_{n+1}-\hat{p} \Vert &= \bigl\Vert \beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n}-\hat{p} \bigr\Vert \\ &= \bigl\Vert \beta _{n}(u_{n}-\hat{p})+(1-\beta _{n}) ( J_{n}u_{n}- \hat{p} ) \bigr\Vert \\ &\leq \beta _{n} \Vert u_{n}-\hat{p} \Vert +(1-\beta _{n}) \Vert J_{n}u_{n}-\hat{p} \Vert \\ &= \Vert u_{n}-\hat{p} \Vert \\ &\leq \Vert x_{n}-\hat{p} \Vert +\theta _{n} \Vert x_{n}-x_{n-1} \Vert . \end{aligned}$$
(7)

From Lemma 2.8 and (C1), we conclude from estimate (7) that \(\lim_{n\rightarrow \infty }\Vert x_{n}-\hat{p}\Vert \) exists; in particular, the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\) are all bounded.

Step 2. Show that \(x_{n}\rightharpoonup \hat{q}\in (A+B)^{-1}(0)\).

Since \(\hat{p}=J_{n}\hat{p}\), it follows from Lemma 2.1 and Lemma 2.7 that

$$\begin{aligned} \Vert x_{n+1}-\hat{p} \Vert ^{2} = {}&\bigl\Vert \beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n}- \hat{p} \bigr\Vert ^{2} \\ = {}&\bigl\Vert \beta _{n}(u_{n}-\hat{p})+(1-\beta _{n}) (J_{n}u_{n}-\hat{p}) \bigr\Vert ^{2} \\ \leq{}& \beta _{n} \Vert u_{n}-\hat{p} \Vert ^{2}+(1-\beta _{n}) \Vert J_{n}u_{n}- \hat{p} \Vert ^{2} \\ \leq {}& \Vert u_{n}-\hat{p} \Vert ^{2}-(1-\beta _{n})s_{n}(2\alpha -s_{n}) \Vert Au_{n}-A\hat{p} \Vert ^{2} \\ &{}-(1-\beta _{n}) \Vert u_{n}-s_{n}Au_{n}-J_{n}u_{n}+s_{n}A \hat{p} \Vert ^{2} \\ \leq {}&\alpha _{n} \Vert y_{n}-\hat{p} \Vert ^{2}+(1-\alpha _{n}) \Vert T_{n}y_{n}- \hat{p} \Vert ^{2}-(1-\beta _{n})s_{n}(2\alpha -s_{n}) \Vert Au_{n}-A \hat{p} \Vert ^{2} \\ &{}-(1-\beta _{n}) \Vert u_{n}-s_{n}Au_{n}-J_{n}u_{n}+s_{n}A \hat{p} \Vert ^{2} \\ \leq{} & \Vert y_{n}-\hat{p} \Vert ^{2}-(1-\beta _{n})s_{n}(2\alpha -s_{n}) \Vert Au_{n}-A\hat{p} \Vert ^{2} \\ &{}-(1-\beta _{n}) \Vert u_{n}-s_{n}Au_{n}-J_{n}u_{n}+s_{n}A \hat{p} \Vert ^{2} \\ \leq{} & \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle -(1-\beta _{n})s_{n}(2\alpha -s_{n}) \Vert Au_{n}-A \hat{p} \Vert ^{2} \\ &{}-(1-\beta _{n}) \Vert u_{n}-s_{n}Au_{n}-J_{n}u_{n}+s_{n}A \hat{p} \Vert ^{2}. \end{aligned}$$
(8)

As \(\lim_{n\rightarrow \infty }\Vert x_{n}-\hat{p}\Vert \) exists, utilizing (C1), (C3), and (C5) in (8), we get

$$ \lim_{n\rightarrow \infty } \Vert Au_{n}-A\hat{p} \Vert =0. $$
(9)

Also from (8) we get that

$$ \lim_{n\rightarrow \infty } \Vert u_{n}-s_{n}Au_{n}-J_{n}u_{n}+s_{n}A \hat{p} \Vert =0. $$
(10)

Using (9), (10), and the triangle inequality

$$ \Vert u_{n}-J_{n}u_{n} \Vert \leq \Vert u_{n}-s_{n}Au_{n}-J_{n}u_{n}+s_{n}A \hat{p} \Vert +s_{n} \Vert Au_{n}-A\hat{p} \Vert , $$

we get

$$ \lim_{n\rightarrow \infty } \Vert J_{n}u_{n}-u_{n} \Vert =0. $$
(11)

Since \(\liminf_{n\rightarrow \infty }s_{n}>0\), there exists \(s>0\) such that \(s_{n}\geq s\) for all \(n\geq 0\). It follows from Lemma 2.6(b) that

$$ \bigl\Vert T_{s}^{A,B}u_{n}-u_{n} \bigr\Vert \leq 2 \Vert J_{n}u_{n}-u_{n} \Vert . $$

Now, utilizing (11), the above estimate implies that

$$ \lim_{n\rightarrow \infty } \bigl\Vert T_{s}^{A,B}u_{n}-u_{n} \bigr\Vert =0. $$
(12)

From (11), we have

$$ \lim_{n\rightarrow \infty } \Vert x_{n+1}-u_{n} \Vert = \lim_{n \rightarrow \infty }(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert =0. $$
(13)

Again, by Lemma 2.1 and Lemma 2.7, we have

$$\begin{aligned} \Vert x_{n+1}-\hat{p} \Vert ^{2} &\leq \beta _{n} \Vert u_{n}-\hat{p} \Vert ^{2}+(1-\beta _{n}) \Vert J_{n}u_{n}-\hat{p} \Vert ^{2} \\ &\leq \Vert u_{n}-\hat{p} \Vert ^{2} \\ &\leq \alpha _{n} \Vert y_{n}-\hat{p} \Vert ^{2}+(1-\alpha _{n}) \Vert T_{n}y_{n}- \hat{p} \Vert ^{2}-\alpha _{n}(1-\alpha _{n}) \Vert T_{n}y_{n}-y_{n} \Vert ^{2} \\ &\leq \Vert y_{n}-\hat{p} \Vert ^{2}-\alpha _{n}(1-\alpha _{n}) \Vert T_{n}y_{n}-y_{n} \Vert ^{2} \\ &\leq \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle -\alpha _{n}(1-\alpha _{n}) \Vert T_{n}y_{n}-y_{n} \Vert ^{2}. \end{aligned}$$

Utilizing (C1), (C2), and the existence of \(\lim_{n\rightarrow \infty }\Vert x_{n}-\hat{p}\Vert \), the above estimate implies that

$$ \lim_{n\rightarrow \infty } \Vert T_{n}y_{n}-y_{n} \Vert =0. $$
(14)

Note that

$$ \Vert u_{n}-y_{n} \Vert =(1-\alpha _{n}) \Vert T_{n}y_{n}-y_{n} \Vert . $$

Using (14), the above estimate implies that

$$ \lim_{n\rightarrow \infty } \Vert u_{n}-y_{n} \Vert =0. $$
(15)

By the definition of \(\{y_{n}\}\) and (C1), we have

$$ \lim_{n\rightarrow \infty } \Vert y_{n}-x_{n} \Vert = \lim_{n \rightarrow \infty }\theta _{n} \Vert x_{n}-x_{n-1} \Vert =0. $$
(16)

It follows from (13), (15), and (16) that

$$ \Vert x_{n+1}-x_{n} \Vert \leq \Vert x_{n+1}-u_{n} \Vert + \Vert u_{n}-y_{n} \Vert + \Vert y_{n}-x_{n} \Vert \overset{n\rightarrow \infty }{\longrightarrow }0. $$
(17)

Moreover, from (13) and (17), we have

$$ \Vert u_{n}-x_{n} \Vert \leq \Vert u_{n}-x_{n+1} \Vert + \Vert x_{n+1}-x_{n} \Vert \overset{n\rightarrow \infty }{\longrightarrow }0. $$
(18)

Since \(\{x_{n}\}\) is bounded and \(\mathcal{H}_{1}\) is reflexive, the set of weak cluster points \(\nu _{w}(x_{n})=\{x\in \mathcal{H}_{1}:x_{n_{i}}\rightharpoonup x,\{x_{n_{i}} \}\subset \{x_{n}\}\}\) is nonempty. Let \(\hat{q}\in \nu _{w}(x_{{n}})\) be an arbitrary element. Then there exists a subsequence \(\{x_{n_{i}}\}\subset \{x_{n}\}\) converging weakly to \(\hat{q}\). Let \(\hat{p}\in \nu _{w}(x_{n})\) and \(\{x_{n_{m}}\}\subset \{x_{n}\}\) be such that \(x_{n_{m}}\rightharpoonup \hat{p}\). From (18), we also have \(u_{n_{i}}\rightharpoonup \hat{q}\) and \(u_{n_{m}}\rightharpoonup \hat{p}\). Since \(T_{s}^{A,B}\) is nonexpansive, it follows from (12), Lemma 2.2, and Lemma 2.6(a) that \(\hat{p},\hat{q}\in (A+B)^{-1}(0)\). By applying Lemma 2.5, we obtain \(\hat{p}=\hat{q}\); hence \(x_{n}\rightharpoonup \hat{q}\in (A+B)^{-1}(0)\).

Step 3. Show that \(\hat{q}\in \varOmega \).

In order to proceed, we first set \(v_{n}=T_{r_{n}}^{F}(\operatorname{Id}-\gamma h^{\ast }(\operatorname{Id}-T_{r_{n}}^{G})h)y_{n}\). Hence, for any \(\hat{p}\in \varGamma \), we calculate the following estimate:

$$\begin{aligned} \Vert v_{n}-\hat{p} \Vert ^{2} ={}& \bigl\Vert T_{r_{n}}^{F}\bigl(\operatorname{Id}-\gamma h^{ \ast } \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr)h\bigr)y_{n}- \hat{p} \bigr\Vert ^{2} \\ \leq {}& \bigl\Vert y_{n}-\gamma h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n}-\hat{p} \bigr\Vert ^{2} \\ \leq{} & \Vert y_{n}-\hat{p} \Vert ^{2}+\gamma ^{2} \bigl\Vert h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n} \bigr\Vert ^{2}+2\gamma \bigl\langle \hat{p}-y_{n},h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n} \bigr\rangle \\ \leq{} & \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle +\gamma ^{2}\bigl\langle hy_{n}-T_{r_{n}}^{G}hy_{n},h^{ \ast }h \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n}\bigr\rangle \\ &{}+2\gamma \bigl\langle \hat{p}-y_{n},h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\rangle \\ \leq{} & \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle +L\gamma ^{2} \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2} \\ &{}+2\gamma \bigl\langle \hat{p}-y_{n},h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\rangle . \end{aligned}$$
(19)

Note that

$$\begin{aligned} 2\gamma \bigl\langle \hat{p}-y_{n},h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\rangle ={}&2\gamma \bigl\langle h(\hat{p}-y_{n}),hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\rangle \\ ={}&2\gamma \bigl\langle \bigl(h\hat{p}-T_{r_{n}}^{G}hy_{n}\bigr)- \bigl(hy_{n}-T_{r_{n}}^{G}hy_{n}\bigr),hy_{n}-T_{r_{n}}^{G}hy_{n}\bigr\rangle \\ ={}&2\gamma \bigl[ \bigl\langle h\hat{p}-T_{r_{n}}^{G}hy_{n},hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\rangle - \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2}\bigr] \\ \leq{} &2\gamma \biggl[ \frac{1}{2} \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2}- \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2}\biggr] \\ ={}&{-}\gamma \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2}. \end{aligned}$$
(20)

Substituting (20) in (19), we have

$$ \Vert v_{n}-\hat{p} \Vert ^{2}\leq \Vert x_{n}-\hat{p} \Vert ^{2}+2 \theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle +\gamma (L\gamma -1) \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2}. $$
(21)

Moreover,

$$\begin{aligned} \Vert u_{n}-\hat{p} \Vert ^{2} &\leq \alpha _{n} \Vert y_{n}-\hat{p} \Vert ^{2}+(1-\alpha _{n}) \Vert v_{n}-\hat{p} \Vert ^{2} \\ &\leq \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p} \rangle +\gamma (L\gamma -1) \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2}. \end{aligned}$$

Rearranging the above estimate, we have

$$\begin{aligned} &\gamma (1-L\gamma ) \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert ^{2} \\ &\quad\leq \Vert x_{n}-\hat{p} \Vert ^{2}- \Vert u_{n}-\hat{p} \Vert ^{2}+2 \theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle \\ &\quad\leq \bigl( \Vert x_{n}-\hat{p} \Vert + \Vert u_{n}- \hat{p} \Vert \bigr) \Vert x_{n}-u_{n} \Vert +2\theta _{n} \langle x_{n}-x_{n-1},y_{n}-\hat{p} \rangle. \end{aligned}$$

Since \(\gamma ( 1-\gamma L ) >0\), therefore utilizing (18) and (C1), the above estimate implies that

$$ \lim_{n\rightarrow \infty } \bigl\Vert hy_{n}-T_{r_{n}}^{G}hy_{n} \bigr\Vert =0. $$
(22)

Since \(T_{r_{n}}^{F}\) is firmly nonexpansive and \(\operatorname{Id}-\gamma h^{\ast }(\operatorname{Id}-T_{r_{n}}^{G})h\) is nonexpansive, it follows that

$$\begin{aligned} \Vert v_{n}-\hat{p} \Vert ^{2} ={}& \bigl\Vert T_{r_{n}}^{F}\bigl(y_{n}-\gamma h^{ \ast } \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr)-T_{r_{n}}^{F}\hat{p} \bigr\Vert ^{2} \\ \leq {}&\bigl\langle T_{r_{n}}^{F}\bigl(y_{n}-\gamma h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n}\bigr)-T_{r_{n}}^{F} \hat{p},y_{n}- \gamma h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n}-\hat{p} \bigr\rangle \\ ={}&\bigl\langle v_{n}-\hat{p}, y_{n}-\gamma h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n}-\hat{p}\bigr\rangle \\ ={}&\frac{1}{2}\bigl\{ \Vert v_{n}-\hat{p} \Vert ^{2}+ \bigl\Vert y_{n}-\gamma h^{ \ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n}-\hat{p} \bigr\Vert ^{2} \\ &{}- \bigl\Vert v_{n}-y_{n}+\gamma h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\Vert ^{2} \bigr\} \\ \leq{} &\frac{1}{2}\bigl\{ \Vert v_{n}-\hat{p} \Vert ^{2}+ \Vert y_{n}-\hat{p} \Vert ^{2}- \bigl\Vert v_{n}-y_{n}+\gamma h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\Vert ^{2}\bigr\} \\ ={}&\frac{1}{2}\bigl\{ \Vert v_{n}-\hat{p} \Vert ^{2}+ \Vert y_{n}-\hat{p} \Vert ^{2}-\bigl( \Vert v_{n}-y_{n} \Vert ^{2}+\gamma ^{2} \bigl\Vert h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n} \bigr\Vert ^{2} \\ &{}-2\gamma \bigl\langle v_{n}-y_{n}, h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\rangle \bigr)\bigr\} . \end{aligned}$$

Simplifying the above estimate, we get

$$ \Vert v_{n}-\hat{p} \Vert ^{2}\leq \Vert y_{n}-\hat{p} \Vert ^{2}- \Vert v_{n}-y_{n} \Vert ^{2}+2\gamma \Vert v_{n}-y_{n} \Vert \bigl\Vert h^{ \ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n} \bigr\Vert . $$
(23)

Taking into consideration the convexity of \(\Vert \cdot \Vert ^{2}\) and (23), we have

$$\begin{aligned} \Vert u_{n}-\hat{p} \Vert ^{2} \leq{} &\alpha _{n} \Vert y_{n}-\hat{p} \Vert ^{2}+(1-\alpha _{n}) \Vert v_{n}-\hat{p} \Vert ^{2} \\ \leq{} &\alpha _{n} \Vert y_{n}-\hat{p} \Vert ^{2}+(1-\alpha _{n}) \bigl( \Vert y_{n}-\hat{p} \Vert ^{2}- \Vert v_{n}-y_{n} \Vert ^{2} \\ &{}+2\gamma \Vert v_{n}-y_{n} \Vert \bigl\Vert h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n} \bigr\Vert \bigr). \end{aligned}$$

Rearranging the above estimate, we get

$$\begin{aligned} (1-\alpha _{n}) \Vert v_{n}-y_{n} \Vert ^{2} \leq {}& \Vert y_{n}-\hat{p} \Vert ^{2}- \Vert u_{n}-\hat{p} \Vert ^{2}+2\gamma \Vert v_{n}-y_{n} \Vert \bigl\Vert h^{\ast }\bigl( \operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\Vert \\ \leq {}& \bigl( \Vert y_{n}-\hat{p} \Vert + \Vert u_{n}- \hat{p} \Vert \bigr) \Vert y_{n}-u_{n} \Vert \\ &{}+2\gamma \Vert v_{n}-y_{n} \Vert \bigl\Vert h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n} \bigr\Vert . \end{aligned}$$

Utilizing (15), (22), and (C2), we have

$$ \lim_{n\rightarrow \infty } \Vert v_{n}-y_{n} \Vert =0. $$
(24)

From (16), (24), and the triangle inequality

$$ \Vert v_{n}-x_{n} \Vert \leq \Vert v_{n}-y_{n} \Vert + \Vert y_{n}-x_{n} \Vert , $$

we get

$$ \lim_{n\rightarrow \infty } \Vert v_{n}-x_{n} \Vert =0. $$
(25)

It follows from Step 2 that \(x_{n}\rightharpoonup \hat{q}\); therefore we conclude from (25) that \(v_{n}\rightharpoonup \hat{q}\). Next, we show that \(\hat{q}\in \operatorname{MEP}(F,\phi _{f})\). Since \(v_{n}=T_{r_{n}}^{F}(\operatorname{Id}-\gamma h^{\ast }(\operatorname{Id}-T_{r_{n}}^{G})h)y_{n}\), we have

$$ F(v_{n},y)+\phi _{f}(y)-\phi _{f}(v_{n})+ \frac{1}{r_{n}}\bigl\langle y-v_{n},v_{n}-y_{n}+ \gamma h^{\ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G} \bigr)hy_{n}\bigr\rangle \geq 0\quad\text{for all }y\in C. $$

This implies that

$$ F(v_{n},y)+\phi _{f}(y)-\phi _{f}(v_{n})+ \frac{1}{r_{n}}\langle y-v_{n},v_{n}-y_{n} \rangle +\frac{\gamma }{r_{n}}\bigl\langle y-v_{n},h^{\ast } \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\rangle \geq 0. $$

From Assumption 2.3(A2), we have

$$ \phi _{f}(y)-\phi _{f}(v_{n})+ \frac{1}{r_{n}}\langle y-v_{n},v_{n}-y_{n} \rangle +\frac{\gamma }{r_{n}}\bigl\langle y-v_{n},h^{\ast } \bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr)hy_{n} \bigr\rangle \geq F(y,v_{n}) $$

for all \(y\in C\). Since \(v_{n}\rightharpoonup \hat{q}\), utilizing (22), (24), and (C4), the above estimate implies that

$$ F(y,\hat{q})+\phi _{f}(\hat{q})-\phi _{f}(y)\leq 0 \quad\text{for all } y \in C. $$

Let \(y_{t}=ty+(1-t)\hat{q}\) for \(t\in (0,1]\) and \(y\in C\). Since \(\hat{q}\in C\), this implies that \(y_{t}\in C\) and hence \(F(y_{t},\hat{q})+\phi _{f}(\hat{q})-\phi _{f}(y_{t})\leq 0\). Using Assumption 2.3((A1) and (A4)) together with the convexity of \(\phi _{f}\), it follows that

$$\begin{aligned} 0 &=F(y_{t},y_{t}) \\ &\leq tF(y_{t},y)+(1-t)F(y_{t},\hat{q}) \\ &\leq tF(y_{t},y)+(1-t) \bigl( \phi _{f}(y_{t})- \phi _{f}(\hat{q}) \bigr) \\ &\leq tF(y_{t},y)+(1-t)t \bigl( \phi _{f}(y)-\phi _{f}(\hat{q}) \bigr) . \end{aligned}$$

Dividing by \(t>0\) gives \(0\leq F(y_{t},y)+(1-t) ( \phi _{f}(y)-\phi _{f}(\hat{q}) ) \). Letting \(t\rightarrow 0^{+}\) and using (A3), we have

$$ F(\hat{q},y)+\phi _{f}(y)-\phi _{f}(\hat{q})\geq 0 \quad\text{for all } y \in C. $$

This implies that \(\hat{q}\in \operatorname{MEP}(F,\phi _{f})\). It remains to show that \(h\hat{q}\in \operatorname{MEP}(G,\phi _{g})\). Since \(y_{n}\rightharpoonup \hat{q}\) (utilizing estimate (16) and the fact that \(x_{n}\rightharpoonup \hat{q}\)) and h is a bounded linear operator, therefore \(hy_{n}\rightharpoonup h\hat{q}\). Hence, it follows from (22) that

$$ T_{r_{n}}^{G}hy_{n}\rightharpoonup h\hat{q} \quad\text{as } n \rightarrow \infty. $$
(26)

Moreover, Lemma 2.4 implies that

$$ G\bigl(T_{r_{n}}^{G}hy_{n},z\bigr)+\phi _{g}(z)-\phi _{g}\bigl(T_{r_{n}}^{G}hy_{n} \bigr)+ \frac{1}{r_{n}}\bigl\langle z-T_{r_{n}}^{G}hy_{n},T_{r_{n}}^{G}hy_{n}-hy_{n} \bigr\rangle \geq 0 $$

for all \(z\in Q\). Since G is upper semicontinuous in the first argument, taking lim sup of the above estimate as \(n\rightarrow \infty \) and utilizing (C4), (22), and (26), we have

$$ G(h\hat{q},z)+\phi _{g}(z)-\phi _{g}(h\hat{q})\geq 0 $$

for all \(z\in Q\). This implies that \(h\hat{q}\in \operatorname{MEP}(G,\phi _{g})\) and hence \(\hat{q}\in \operatorname{SMEP}(F,\phi _{f},G,\phi _{g})\). From this together with the conclusion of Step 2, we have that \(\hat{q}\in \varGamma \). This completes the proof. □

Remark 3.2

The split mixed equilibrium problem contains the following problems as special cases:

  1. (i)

    Split equilibrium problem provided that \(\phi _{f}=\phi _{g}=0\);

  2. (ii)

    Mixed equilibrium problem provided that \(G = 0\) and \(\phi _{g}=0\);

  3. (iii)

    Classical equilibrium problem provided that \(G = 0\) and \(\phi _{f}=\phi _{g}=0\).

Hence the following results can be obtained from Theorem 3.1 immediately.

Corollary 3.3

Let \(\mathcal{H}_{1}\)and \(\mathcal{H}_{2}\)be two real Hilbert spaces, and let \(C\subseteq \mathcal{H}_{1}\)and \(Q\subseteq \mathcal{H}_{2}\)be nonempty closed convex subsets of \(\mathcal{H}_{1}\)and \(\mathcal{H}_{2}\), respectively. Let \(F:C\times C\rightarrow \mathbb{R}\)and \(G:Q\times Q\rightarrow \mathbb{R}\)be two bifunctions satisfying (A1)–(A4) of Assumption 2.3such that G is upper semicontinuous. Let \(h:\mathcal{H}_{1}\rightarrow \mathcal{H}_{2}\)be a bounded linear operator, \(A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}\)be an α-inverse strongly monotone operator, and \(B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}\)be a maximally monotone operator. Assume that \(\varGamma =(A+B)^{-1}(0)\cap \varOmega \neq \emptyset \), where \(\varOmega =\{x^{\ast }\in C:x^{\ast }\in EP(F)\ \textit{and}\ hx^{\ast }\in EP(G)\}\). For given \(x_{0},x_{1}\in \mathcal{H}_{1}\), let the iterative sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\)be generated by

$$\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}\bigl(\operatorname{Id}-\gamma h^{ \ast }\bigl(\operatorname{Id}-T_{r_{n}}^{G}\bigr)h \bigr)y_{n} , \\ &x_{n+1} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n},\quad n\geq 1, \end{aligned}$$
(27)

where \(J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)\) with \(\{s_{n}\}\subset (0,2\alpha ) \) and \(\{\theta _{n}\}\subset [ 0,\theta ]\) for some \(\theta \in [ 0,1)\). Let \(\gamma \in (0,\frac{1}{L})\), where L is the spectral radius of \(h^{\ast }h\) and \(h^{\ast }\) is the adjoint of h. Let \(\{r_{n}\}\subset (0,\infty )\), and let \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\) be sequences in \([0,1]\). Assume that the following conditions hold:

  1. C1

    \(\sum_{n=1}^{\infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert <\infty \);

  2. C2

    \(0<\liminf_{n\rightarrow \infty }\alpha _{n}\leq \limsup_{n \rightarrow \infty }\alpha _{n}<1\);

  3. C3

    \(0<\liminf_{n\rightarrow \infty }\beta _{n}\leq \limsup_{n \rightarrow \infty }\beta _{n}<1\);

  4. C4

    \(\liminf_{n\rightarrow \infty }r_{n}>0\);

  5. C5

    \(0<\liminf_{n\rightarrow \infty }s_{n}\leq \limsup_{n\rightarrow \infty }s_{n}<2\alpha \).

Then the sequence \(\{x_{n}\}\) generated by (27) weakly converges to a point \(\hat{q}\in \varGamma \).

Corollary 3.4

Let C be a nonempty closed convex subset of a real Hilbert space \(\mathcal{H}_{1}\). Let \(F:C\times C\rightarrow \mathbb{R}\) be a bifunction satisfying (A1)–(A4) of Assumption 2.3, and let \(\phi _{f}:C\rightarrow \mathbb{R}\cup \{+\infty \}\) be a proper lower semicontinuous convex function such that \(C\cap \operatorname{dom} ( \phi _{f} ) \neq \emptyset \). Let \(A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}\) be an α-inverse strongly monotone operator and \(B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}\) be a maximally monotone operator. Assume that \(\varGamma =(A+B)^{-1}(0) \cap \operatorname{MEP}(F,\phi _{f})\neq \emptyset \). For given \(x_{0},x_{1}\in \mathcal{H}_{1}\), let the iterative sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\) be generated by

$$\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}y_{n}, \\ &x_{n+1} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n},\quad n\geq 1, \end{aligned}$$
(28)

where \(J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)\)with \(\{s_{n}\}\subset (0,2\alpha ) \)and \(\{\theta _{n}\}\subset [ 0,\theta ]\)for some \(\theta \in [ 0,1)\). Let \(\{r_{n}\}\subset (0,\infty )\)and \(\{\alpha _{n}\},\{\beta _{n}\}\)be in \([0,1]\). Assume that the following conditions hold:

  1. C1

    \(\sum_{n=1}^{\infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert <\infty \);

  2. C2

    \(0<\liminf_{n\rightarrow \infty }\alpha _{n}\leq \limsup_{n \rightarrow \infty }\alpha _{n}<1\);

  3. C3

    \(0<\liminf_{n\rightarrow \infty }\beta _{n}\leq \limsup_{n \rightarrow \infty }\beta _{n}<1\);

  4. C4

    \(\liminf_{n\rightarrow \infty }r_{n}>0\);

  5. C5

    \(0<\liminf_{n\rightarrow \infty }s_{n}\leq \limsup_{n\rightarrow \infty }s_{n}<2\alpha \).

Then the sequence \(\{x_{n}\}\) generated by (28) weakly converges to a point \(\hat{q}\in \varGamma \).

Corollary 3.5

Let C be a nonempty closed convex subset of a real Hilbert space \(\mathcal{H}_{1}\). Let \(F:C\times C\rightarrow \mathbb{R}\)be a bifunction satisfying (A1)–(A4) of Assumption 2.3. Let \(A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}\)be an α-inverse strongly monotone operator and \(B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}\)be a maximally monotone operator. Assume that \(\varGamma =(A+B)^{-1}(0)\cap EP(F)\neq \emptyset \). For given \(x_{0},x_{1}\in \mathcal{H}_{1}\), let the iterative sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\)be generated by

$$\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}y_{n}, \\ &x_{n+1} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n},\quad n\geq 1, \end{aligned}$$
(29)

where \(J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)\)with \(\{s_{n}\}\subset (0,2\alpha ) \)and \(\{\theta _{n}\}\subset [ 0,\theta ]\)for some \(\theta \in [ 0,1)\). Let \(\{r_{n}\}\subset (0,\infty )\)and \(\{\alpha _{n}\},\{\beta _{n}\}\)be in \([0,1]\). Assume that the following conditions hold:

  1. C1

    \(\sum_{n=1}^{\infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert <\infty \);

  2. C2

    \(0<\liminf_{n\rightarrow \infty }\alpha _{n}\leq \limsup_{n \rightarrow \infty }\alpha _{n}<1\);

  3. C3

    \(0<\liminf_{n\rightarrow \infty }\beta _{n}\leq \limsup_{n \rightarrow \infty }\beta _{n}<1\);

  4. C4

    \(\liminf_{n\rightarrow \infty }r_{n}>0\);

  5. C5

    \(0<\liminf_{n\rightarrow \infty }s_{n}\leq \limsup_{n\rightarrow \infty }s_{n}<2\alpha \).

Then the sequence \(\{x_{n}\}\) generated by (29) weakly converges to a point \(\hat{q}\in \varGamma \).

4 Strong convergence results

This section is devoted to modifying the sequence \(\{x_{n}\}\) generated by (4) to establish strong convergence results in Hilbert spaces. For this, we equip the proposed sequence with the shrinking projection method.

Theorem 4.1

Let \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\) be two real Hilbert spaces, and let \(C\subseteq \mathcal{H}_{1}\) and \(Q\subseteq \mathcal{H}_{2}\) be nonempty closed convex subsets of \(\mathcal{H}_{1}\) and \(\mathcal{H}_{2}\), respectively. Let \(F:C\times C\rightarrow \mathbb{R}\) and \(G:Q\times Q\rightarrow \mathbb{R}\) be two bifunctions satisfying (A1)–(A4) of Assumption 2.3 such that G is upper semicontinuous in the first argument. Let \(h:\mathcal{H}_{1}\rightarrow \mathcal{H}_{2}\) be a bounded linear operator, and let \(\phi _{f}:C\rightarrow \mathbb{R}\cup \{+\infty \}\) and \(\phi _{g}:Q\rightarrow \mathbb{R}\cup \{+\infty \}\) be proper lower semicontinuous convex functions such that \(C\cap \operatorname{dom} ( \phi _{f} ) \neq \emptyset \) and \(Q\cap \operatorname{dom} ( \phi _{g} ) \neq \emptyset \). Let \(A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}\) be an α-inverse strongly monotone operator and \(B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}\) be a maximally monotone operator. Assume that \(\varGamma =(A+B)^{-1}(0)\cap \varOmega \neq \emptyset \), where \(\varOmega =\{x^{\ast }\in C:x^{\ast }\in \operatorname{MEP} (F,\phi _{f})\ \textit{and}\ hx^{\ast }\in \operatorname{MEP}(G,\phi _{g})\}\). For given \(x_{0},x_{1}\in C_{1}=C\), let the iterative sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\) be generated by

$$\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}\bigl(I-\gamma h^{ \ast } \bigl(I-T_{r_{n}}^{G}\bigr)h\bigr)y_{n}, \\ &z_{n} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n} , \\ &C_{n+1} =\bigl\{ z\in C_{n}: \Vert z_{n}-z \Vert ^{2}\leq \Vert x_{n}-z \Vert ^{2}+2 \theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}-2\theta _{n} \langle x_{n}-z,x_{n-1}-x_{n} \rangle \bigr\} , \\ &x_{n+1} =P_{C_{n+1}}x_{1}, \quad n \geq 1, \end{aligned}$$
(30)

where \(J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)\) with \(\{s_{n}\}\subset (0,2\alpha ) \) and \(\{\theta _{n}\}\subset [ 0,\theta ]\) for some \(\theta \in [ 0,1)\). Let \(\gamma \in (0,\frac{1}{L})\), where L is the spectral radius of \(h^{\ast }h\) and \(h^{\ast }\) is the adjoint of h. Let \(\{r_{n}\}\subset (0,\infty )\), and let \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\) be sequences in \([0,1]\). Assume that the following conditions hold:

  1. C1

    \(\sum^{\infty }_{n=1}\theta _{n}\|x_{n}-x_{n-1}\| < \infty \);

  2. C2

    \(0 < \liminf_{n \rightarrow \infty }\alpha _{n} \leq \limsup_{n \rightarrow \infty }\alpha _{n} < 1\);

  3. C3

    \(0 < \liminf_{n \rightarrow \infty }\beta _{n} \leq \limsup_{n \rightarrow \infty }\beta _{n} < 1\);

  4. C4

    \(\liminf_{n \rightarrow \infty }r_{n} > 0\);

  5. C5

    \(0 < \liminf_{n \rightarrow \infty }s_{n} \leq \limsup_{n \rightarrow \infty }s_{n} < 2\alpha \).

Then the sequence \(\{x_{n}\}\) generated by (30) strongly converges to a point \(\hat{q}=P_{\varGamma }x_{1}\).
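
Before the proof, it may help to note (this reformulation is ours; the paper invokes Lemma 2.9 directly) that the constraint defining \(C_{n+1}\) in (30) is a half-space condition in z: expanding the squared norms shows that \(C_{n+1}=C_{n}\cap \{z:\langle a_{n},z\rangle \leq b_{n}\}\) with \(a_{n}=2(x_{n}-z_{n})-2\theta _{n}(x_{n-1}-x_{n})\) and \(b_{n}=\Vert x_{n}\Vert ^{2}-\Vert z_{n}\Vert ^{2}+2\theta _{n}^{2}\Vert x_{n}-x_{n-1}\Vert ^{2}-2\theta _{n}\langle x_{n},x_{n-1}-x_{n}\rangle \), which is why each \(C_{n+1}\) is closed and convex and why \(P_{C_{n+1}}\) amounts to projecting onto an intersection of half-spaces with C. A minimal sketch of the membership test:

```python
import numpy as np

def halfspace_coefficients(x_n, x_prev, z_n, theta_n):
    """Return (a_n, b_n) such that the inequality defining C_{n+1} in (30),
    ||z_n - z||^2 <= ||x_n - z||^2 + 2*theta_n**2*||x_n - x_prev||^2
                    - 2*theta_n*<x_n - z, x_prev - x_n>,
    is equivalent to <a_n, z> <= b_n (the ||z||^2 terms cancel on both sides)."""
    a = 2.0 * (x_n - z_n) - 2.0 * theta_n * (x_prev - x_n)
    b = (np.dot(x_n, x_n) - np.dot(z_n, z_n)
         + 2.0 * theta_n ** 2 * np.dot(x_n - x_prev, x_n - x_prev)
         - 2.0 * theta_n * np.dot(x_n, x_prev - x_n))
    return a, b

def in_next_set(z, x_n, x_prev, z_n, theta_n):
    """Membership test for the half-space part of C_{n+1}."""
    a, b = halfspace_coefficients(x_n, x_prev, z_n, theta_n)
    return float(np.dot(a, z)) <= b

# Consistency check against the original squared-norm inequality.
rng = np.random.default_rng(0)
x_n, x_prev, z_n, z = (rng.standard_normal(6) for _ in range(4))
theta_n = 0.3
lhs = np.linalg.norm(z_n - z) ** 2
rhs = (np.linalg.norm(x_n - z) ** 2
       + 2 * theta_n ** 2 * np.linalg.norm(x_n - x_prev) ** 2
       - 2 * theta_n * np.dot(x_n - z, x_prev - x_n))
assert (lhs <= rhs) == in_next_set(z, x_n, x_prev, z_n, theta_n)
```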

Proof

The proof is divided into the following steps:

Step 1. Show that the sequence \(\{ x_{n} \} \) defined in (30) is well defined.

We know that \((A+B)^{-1}(0)\) and Ω are closed and convex by Lemma 2.6 and Lemma 2.4, respectively. Moreover, from Lemma 2.9 we have that \(C_{n+1}\) is closed and convex for each \(n\geq 1\). For any \(\hat{p}\in \varGamma \), it follows from (30), (5), and (6) that

$$\begin{aligned} \Vert z_{n}-\hat{p} \Vert ^{2} &\leq \beta _{n} \Vert u_{n}-\hat{p} \Vert ^{2}+(1-\beta _{n}) \Vert J_{n}u_{n}-\hat{p} \Vert ^{2} \\ &\leq \Vert u_{n}-\hat{p} \Vert ^{2} \\ &\leq \alpha _{n} \Vert y_{n}-\hat{p} \Vert ^{2}+(1-\alpha _{n}) \Vert T_{n}y_{n}- \hat{p} \Vert ^{2} \\ &\leq \Vert y_{n}-\hat{p} \Vert ^{2} \\ &\leq \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle \\ &\leq \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}-2\theta _{n}\langle x_{n}- \hat{p},x_{n-1}-x_{n}\rangle. \end{aligned}$$

It follows from the above estimate that \(\varGamma \subset C_{n+1}\). Summing up these facts, we conclude that \(C_{n+1}\) is nonempty, closed, and convex for all \(n\geq 1\), and hence the sequence \(\{ x_{n} \} \) is well defined.

Step 2. Show that \(\lim_{n\rightarrow \infty }\Vert x_{n}-x_{1}\Vert \) exists.

Since Γ is a nonempty closed and convex subset of \(\mathcal{H}_{1}\), there exists a unique \(x^{\ast }\in \varGamma \) such that \(x^{\ast }=P_{\varGamma }x_{1}\). From \(x_{n+1}=P_{C_{n+1}}x_{1}\), we have \(\Vert x_{n+1}-x_{1}\Vert \leq \Vert \hat{p}-x_{1}\Vert \) for all \(\hat{p}\in \varGamma \subset C_{n+1}\). In particular, \(\Vert x_{n+1}-x_{1}\Vert \leq \Vert P_{\varGamma }x_{1}-x_{1}\Vert \). This proves that the sequence \(\{x_{n}\}\) is bounded. On the other hand, from \(x_{n}=P_{C_{n}}x_{1}\) and \(x_{n+1}=P_{C_{n+1}}x_{1}\in C_{n+1}\subset C_{n}\), we get that

$$ \Vert x_{n}-x_{1} \Vert \leq \Vert x_{n+1}-x_{1} \Vert . $$

This implies that \(\{ \Vert x_{n}-x_{1} \Vert \}\) is nondecreasing and hence

$$ \lim_{n\rightarrow \infty } \Vert x_{n}-x_{1} \Vert \quad \text{exists.} $$
(31)

Step 3. Show that \(x_{n}\rightharpoonup \hat{q}\in (A+B)^{-1}(0)\).

In order to proceed, we first calculate the following estimate which is required in the sequel:

$$\begin{aligned} \Vert x_{n+1}-x_{n} \Vert ^{2} &= \Vert x_{n+1}-x_{1}+x_{1}-x_{n} \Vert ^{2} \\ &= \Vert x_{n+1}-x_{1} \Vert ^{2}+ \Vert x_{n}-x_{1} \Vert ^{2}-2 \langle x_{n}-x_{1},x_{n+1}-x_{1} \rangle \\ &= \Vert x_{n+1}-x_{1} \Vert ^{2}+ \Vert x_{n}-x_{1} \Vert ^{2}-2 \langle x_{n}-x_{1},x_{n+1}-x_{n}+x_{n}-x_{1} \rangle \\ &= \Vert x_{n+1}-x_{1} \Vert ^{2}- \Vert x_{n}-x_{1} \Vert ^{2}-2 \langle x_{n}-x_{1},x_{n+1}-x_{n} \rangle \\ &\leq \Vert x_{n+1}-x_{1} \Vert ^{2}- \Vert x_{n}-x_{1} \Vert ^{2}. \end{aligned}$$

Since \(x_{n}=P_{C_{n}}x_{1}\) and \(x_{n+1}\in C_{n+1}\subset C_{n}\), the characterization of the metric projection gives \(\langle x_{n}-x_{1},x_{n+1}-x_{n}\rangle \geq 0\), which justifies the last inequality. Taking limsup on both sides of the above estimate and utilizing (31), we have \(\limsup_{n\rightarrow \infty } \Vert x_{n+1}-x_{n} \Vert ^{2}=0\). That is,

$$ \lim_{n\rightarrow \infty } \Vert x_{n+1}-x_{n} \Vert =0. $$
(32)

Since \(x_{n+1}\in C_{n+1}\), we have

$$ \Vert z_{n}-x_{n+1} \Vert ^{2}\leq \Vert x_{n}-x_{n+1} \Vert ^{2}+2\theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}-2\theta _{n}\langle x_{n}-x_{n+1},x_{n-1}-x_{n} \rangle. $$

Utilizing (32) and (C1), the above estimate implies that

$$ \lim_{n\rightarrow \infty } \Vert z_{n}-x_{n+1} \Vert =0. $$
(33)

From (32), (33), and the triangle inequality

$$ \Vert z_{n}-x_{n} \Vert \leq \Vert z_{n}-x_{n+1} \Vert + \Vert x_{n+1}-x_{n} \Vert , $$

we get

$$ \lim_{n\rightarrow \infty } \Vert z_{n}-x_{n} \Vert =0. $$
(34)

Also, from Lemma 2.1, we have

$$\begin{aligned} \Vert z_{n}-\hat{p} \Vert ^{2} &=\beta _{n} \Vert u_{n}-\hat{p} \Vert ^{2}+(1- \beta _{n}) \Vert J_{n}u_{n}-\hat{p} \Vert ^{2}-\beta _{n}(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert ^{2} \\ &\leq \Vert u_{n}-\hat{p} \Vert ^{2}-\beta _{n}(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert ^{2} \\ &\leq \alpha _{n} \Vert y_{n}-\hat{p} \Vert ^{2}+(1-\alpha _{n}) \Vert T_{n}y_{n}- \hat{p} \Vert ^{2}-\beta _{n}(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert ^{2} \\ &\leq \Vert y_{n}-\hat{p} \Vert ^{2}-\beta _{n}(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert ^{2} \\ &\leq \Vert x_{n}-\hat{p} \Vert ^{2}+2\theta _{n}\langle x_{n}-x_{n-1},y_{n}-\hat{p}\rangle -\beta _{n}(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert ^{2}. \end{aligned}$$

Rearranging the above estimate, we have

$$\begin{aligned} &\beta _{n}(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert ^{2} \\ &\quad\leq \Vert x_{n}-\hat{p} \Vert ^{2}- \Vert z_{n}-\hat{p} \Vert ^{2}+2 \theta _{n}\langle x_{n}-x_{n-1},y_{n}- \hat{p}\rangle \\ &\quad\leq \bigl( \Vert x_{n}-\hat{p} \Vert + \Vert z_{n}-\hat{p} \Vert \bigr) \Vert x_{n}-z_{n} \Vert +2\theta _{n}\langle x_{n}-x_{n-1},y_{n}- \hat{p}\rangle. \end{aligned}$$

The above estimate, by using (C1), (C3), and (34), implies that

$$ \lim_{n\rightarrow \infty } \Vert J_{n}u_{n}-u_{n} \Vert =0. $$
(35)

Making use of (35), we have the following estimate:

$$ \lim_{n\rightarrow \infty } \Vert z_{n}-u_{n} \Vert = \lim_{n \rightarrow \infty }(1-\beta _{n}) \Vert J_{n}u_{n}-u_{n} \Vert =0. $$
(36)

Reasoning as above, we get from (34) and (36) that

$$ \lim_{n\rightarrow \infty } \Vert u_{n}-x_{n} \Vert =0. $$
(37)

In a similar fashion, we have

$$ \lim_{n\rightarrow \infty } \bigl\Vert T_{s}^{A,B}u_{n}-u_{n} \bigr\Vert =0. $$
(38)

Reasoning as above (Theorem 3.1 Step 2), we have the desired result.

Step 4. Show that \(\hat{q}\in \varOmega \).

See the proof of Step 3 in Theorem 3.1.

Step 5. Show that \(\hat{q}=P_{\varGamma }x_{1}\).

Let \(x=P_{\varGamma }x_{1}\). Since \(\varGamma \subset C_{n+1}\), we have \(x\in C_{n+1}\). Since \(x_{n+1}=P_{C_{n+1}}x_{1}\) and \(x\in C_{n+1}\), we have

$$ \Vert x_{n+1}-x_{1} \Vert \leq \Vert x-x_{1} \Vert . $$

On the other hand, we have

$$\begin{aligned} \Vert x-x_{1} \Vert &\leq \Vert \hat{q}-x_{1} \Vert \\ &\leq \liminf_{n\rightarrow \infty } \Vert x_{n}-x_{1} \Vert \\ &\leq \limsup_{n\rightarrow \infty } \Vert x_{n}-x_{1} \Vert \\ &\leq \Vert x-x_{1} \Vert . \end{aligned}$$

That is,

$$ \Vert \hat{q}-x_{1} \Vert =\lim_{n\rightarrow \infty } \Vert x_{n}-x_{1} \Vert = \Vert x-x_{1} \Vert . $$

Therefore \(\Vert \hat{q}-x_{1} \Vert = \Vert x-x_{1} \Vert \), and the uniqueness of the metric projection yields \(\hat{q}=x=P_{\varGamma }x_{1}\). Moreover, since \(x_{n}-x_{1}\rightharpoonup \hat{q}-x_{1}\) and \(\Vert x_{n}-x_{1} \Vert \rightarrow \Vert \hat{q}-x_{1} \Vert \), the Kadec–Klee property of \(\mathcal{H}_{1}\) implies that \(x_{n}\rightarrow \hat{q}=P_{\varGamma }x_{1}\). This completes the proof. □

Taking into consideration Remark 3.2, the following results can easily be derived from Theorem 4.1.

Corollary 4.2

Let \(\mathcal{H}_{1}\)and \(\mathcal{H}_{2}\)be two real Hilbert spaces, and let \(C\subseteq \mathcal{H}_{1}\)and \(Q\subseteq \mathcal{H}_{2}\)be nonempty closed convex subsets of \(\mathcal{H}_{1}\)and \(\mathcal{H}_{2}\), respectively. Let \(F:C\times C\rightarrow \mathbb{R}\)and \(G:Q\times Q\rightarrow \mathbb{R}\)be two bifunctions satisfying (A1)–(A4) of Assumption 2.3such that G is upper semicontinuous. Let \(h:\mathcal{H}_{1}\rightarrow \mathcal{H}_{2}\)be a bounded linear operator, \(A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}\)be an α-inverse strongly monotone operator, and \(B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}\)be a maximally monotone operator. Assume that \(\varGamma =(A+B)^{-1}(0)\cap \varOmega \neq \emptyset \), where \(\varOmega =\{x^{\ast }\in C:x^{\ast }\in EP(F)\ \textit{and}\ hx^{\ast }\in EP(G)\}\). For given \(x_{0},x_{1}\in \mathcal{H}_{1}\), let the iterative sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\)be generated by

$$\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}\bigl(I-\gamma h^{ \ast } \bigl(I-T_{r_{n}}^{G}\bigr)h\bigr)y_{n}, \\ &z_{n} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n}, \\ &C_{n+1} =\bigl\{ z\in C_{n}: \Vert z_{n}-z \Vert ^{2}\leq \Vert x_{n}-z \Vert ^{2}+2 \theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}-2\theta _{n} \langle x_{n}-z,x_{n-1}-x_{n} \rangle \bigr\} , \\ &x_{n+1} =P_{C_{n+1}}x_{1}, \quad n \geq 1, \end{aligned}$$
(39)

where \(J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)\) with \(\{s_{n}\}\subset (0,2\alpha )\) and \(\{\theta _{n}\}\subset [ 0,\theta ]\) for some \(\theta \in [ 0,1)\). Let \(\gamma \in (0,\frac{1}{L})\), where L is the spectral radius of \(h^{\ast }h\) and \(h^{\ast }\) is the adjoint of h. Let \(\{r_{n}\}\subset (0,\infty )\), and let \(\{\alpha _{n}\}\) and \(\{\beta _{n}\}\) be sequences in \([0,1]\). Assume that the following conditions hold:

(C1) \(\sum_{n=1}^{\infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert <\infty \);

(C2) \(0<\liminf_{n\rightarrow \infty }\alpha _{n}\leq \limsup_{n \rightarrow \infty }\alpha _{n}<1\);

(C3) \(0<\liminf_{n\rightarrow \infty }\beta _{n}\leq \limsup_{n \rightarrow \infty }\beta _{n}<1\);

(C4) \(\liminf_{n\rightarrow \infty }r_{n}>0\);

(C5) \(0<\liminf_{n\rightarrow \infty }s_{n}\leq \limsup_{n\rightarrow \infty }s_{n}<2\alpha \).

Then the sequence \(\{x_{n}\}\) generated by (39) strongly converges to a point \(\hat{q}=P_{\varGamma }x_{1}\).
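To make the structure of scheme (39) concrete, one iteration can be organized as in the following Python/NumPy sketch. The callables T_F, T_G, J, h, h_adj and all parameter values are placeholders to be supplied by the user (they stand for \(T_{r_{n}}^{F}\), \(T_{r_{n}}^{G}\), \(J_{n}\), h, and \(h^{\ast }\)); the projection \(x_{n+1}=P_{C_{n+1}}x_{1}\) is postponed until Example 5.1, where \(C_{n+1}\) is rewritten as an intersection of halfspaces.

```python
import numpy as np

def iterate_39(x_prev, x_curr, T_F, T_G, J, h, h_adj,
               theta, alpha, beta, gamma):
    """One pass of scheme (39). T_F, T_G, J, h, h_adj are user-supplied
    callables acting on NumPy arrays; theta, alpha, beta, gamma are the
    parameters theta_n, alpha_n, beta_n, gamma."""
    # inertial extrapolation step
    y = x_curr + theta * (x_curr - x_prev)
    # split equilibrium step applied to y
    w = y - gamma * h_adj(h(y) - T_G(h(y)))
    u = alpha * y + (1 - alpha) * T_F(w)
    # relaxed forward-backward step
    z = beta * u + (1 - beta) * J(u)
    # the inequality defining C_{n+1} is linear in z, i.e. a halfspace
    # {v : <a, v> <= b}; see Step 5 of Example 5.1 below
    d = x_curr - x_prev
    a = 2 * (x_curr - z) + 2 * theta * d
    b = (np.dot(x_curr, x_curr) - np.dot(z, z)
         + 2 * theta**2 * np.dot(d, d) + 2 * theta * np.dot(x_curr, d))
    return y, u, z, (a, b)
```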

Corollary 4.3

Let C be a nonempty closed convex subset of a real Hilbert space \(\mathcal{H}_{1}\). Let \(F:C\times C\rightarrow \mathbb{R}\) be a bifunction satisfying (A1)–(A4) of Assumption 2.3, and let \(\phi _{f}:C\rightarrow \mathcal{H}_{1}\) be a proper lower semicontinuous and convex function such that \(C\cap \operatorname{dom} ( \phi _{f} ) \neq \emptyset \). Let \(A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}\) be an α-inverse strongly monotone operator and \(B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}\) be a maximally monotone operator. Assume that \(\varGamma =(A+B)^{-1}(0) \cap \operatorname{MEP}(F,\phi _{f})\neq \emptyset \). For given \(x_{0},x_{1}\in \mathcal{H}_{1}\), let the iterative sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\) be generated by

$$\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}y_{n}, \\ &z_{n} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n}, \\ &C_{n+1} =\bigl\{ z\in C_{n}: \Vert z_{n}-z \Vert ^{2}\leq \Vert x_{n}-z \Vert ^{2}+2 \theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}-2\theta _{n} \langle x_{n}-z,x_{n-1}-x_{n} \rangle \bigr\} , \\ &x_{n+1} =P_{C_{n+1}}x_{1},\quad n \geq 1, \end{aligned}$$
(40)

where \(J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)\) with \(\{s_{n}\}\subset (0,2\alpha )\) and \(\{\theta _{n}\}\subset [ 0,\theta ]\) for some \(\theta \in [ 0,1)\). Let \(\{r_{n}\}\subset (0,\infty )\) and \(\{\alpha _{n}\},\{\beta _{n}\}\) be in \([0,1]\). Assume that the following conditions hold:

(C1) \(\sum_{n=1}^{\infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert <\infty \);

(C2) \(0<\liminf_{n\rightarrow \infty }\alpha _{n}\leq \limsup_{n \rightarrow \infty }\alpha _{n}<1\);

(C3) \(0<\liminf_{n\rightarrow \infty }\beta _{n}\leq \limsup_{n \rightarrow \infty }\beta _{n}<1\);

(C4) \(\liminf_{n\rightarrow \infty }r_{n}>0\);

(C5) \(0<\liminf_{n\rightarrow \infty }s_{n}\leq \limsup_{n\rightarrow \infty }s_{n}<2\alpha \).

Then the sequence \(\{x_{n}\}\) generated by (40) strongly converges to a point \(\hat{q}=P_{\varGamma }x_{1}\).

Corollary 4.4

Let C be a nonempty closed convex subset of a real Hilbert space \(\mathcal{H}_{1}\). Let \(F:C\times C\rightarrow \mathbb{R}\) be a bifunction satisfying (A1)–(A4) of Assumption 2.3. Let \(A:\mathcal{H}_{1}\rightarrow \mathcal{H}_{1}\) be an α-inverse strongly monotone operator and \(B:\mathcal{H}_{1}\rightarrow 2^{\mathcal{H}_{1}}\) be a maximally monotone operator. Assume that \(\varGamma =(A+B)^{-1}(0)\cap EP(F)\neq \emptyset \). For given \(x_{0},x_{1}\in \mathcal{H}_{1}\), let the iterative sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\) be generated by

$$\begin{aligned} &y_{n} =x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ &u_{n} =\alpha _{n}y_{n}+(1-\alpha _{n})T_{r_{n}}^{F}y_{n}, \\ &z_{n} =\beta _{n}u_{n}+(1-\beta _{n})J_{n}u_{n}, \\ &C_{n+1} =\bigl\{ z\in C_{n}: \Vert z_{n}-z \Vert ^{2}\leq \Vert x_{n}-z \Vert ^{2}+2 \theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}-2\theta _{n} \langle x_{n}-z,x_{n-1}-x_{n} \rangle \bigr\} , \\ &x_{n+1} =P_{C_{n+1}}x_{1}, \quad n \geq 1, \end{aligned}$$
(41)

where \(J_{n}=(\operatorname{Id}+s_{n}B)^{-1}(\operatorname{Id}-s_{n}A)\) with \(\{s_{n}\}\subset (0,2\alpha )\) and \(\{\theta _{n}\}\subset [ 0,\theta ]\) for some \(\theta \in [ 0,1)\). Let \(\{r_{n}\}\subset (0,\infty )\) and \(\{\alpha _{n}\},\{\beta _{n}\}\) be in \([0,1]\). Assume that the following conditions hold:

(C1) \(\sum_{n=1}^{\infty }\theta _{n}\Vert x_{n}-x_{n-1}\Vert <\infty \);

(C2) \(0<\liminf_{n\rightarrow \infty }\alpha _{n}\leq \limsup_{n \rightarrow \infty }\alpha _{n}<1\);

(C3) \(0<\liminf_{n\rightarrow \infty }\beta _{n}\leq \limsup_{n \rightarrow \infty }\beta _{n}<1\);

(C4) \(\liminf_{n\rightarrow \infty }r_{n}>0\);

(C5) \(0<\liminf_{n\rightarrow \infty }s_{n}\leq \limsup_{n\rightarrow \infty }s_{n}<2\alpha \).

Then the sequence \(\{x_{n}\}\) generated by (41) strongly converges to a point \(\hat{q}=P_{\varGamma }x_{1}\).

Remark 4.5

We remark here that condition (C1) can easily be implemented in numerical computation since the value of \(\Vert x_{n}-x_{n-1}\Vert \) is known before choosing \(\theta _{n}\). Moreover, the parameter \(\theta _{n}\) can be taken as \(0\leq \theta _{n}\leq \widehat{\theta _{n}}\),

$$ \widehat{\theta _{n}}= \textstyle\begin{cases} \min \{\frac{\epsilon _{n}}{ \Vert x_{n}-x_{n-1} \Vert }, \theta \} & \text{if }x_{n} \neq x_{n-1}; \\ \theta & \text{otherwise,} \end{cases} $$

where \(\{\epsilon _{n}\}\) is a positive sequence such that \(\sum_{n=1}^{\infty }\epsilon _{n}<\infty \).
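In the numerical test of Sect. 5 the choices \(\epsilon _{n}=1/n^{2}\) and \(\theta =0.5\) are used; a minimal Python/NumPy sketch of the corresponding selection rule is given below.

```python
import numpy as np

def theta_hat(n, x_curr, x_prev, theta=0.5):
    """Upper bound for the inertial parameter (Remark 4.5) with the
    summable sequence eps_n = 1/n**2; any theta_n in [0, theta_hat(n, .)]
    keeps condition (C1) satisfied."""
    gap = np.linalg.norm(x_curr - x_prev)
    if gap == 0.0:                  # x_n = x_{n-1}
        return theta
    return min(1.0 / (n**2 * gap), theta)
```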

5 Examples and numerical results

In this section, we give examples and numerical results to strengthen the theoretical results established in the previous sections.

Example 5.1

Let \(\mathcal{H}_{1} = \mathcal{H}_{2} = \mathbb{R}^{3}\) with the usual inner product \(\langle x, y\rangle = \sum_{i=1}^{3}x_{i}y_{i}\) for all \(x, y \in \mathbb{R}^{3}\) and the induced Euclidean norm \(\Vert \cdot \Vert \). Let \(C=\{ x \in \mathbb{R}_{+}^{3}:\sqrt{x^{2}_{1}+x^{2}_{2}+x^{2}_{3}} \leq 1 \}\) and \(Q=\{x \in \mathbb{R}_{-}^{3}:\langle a, x\rangle \geq b\}\), where \(a=(2,-1,3)\) and \(b=1\). Let \(F:C \times C \rightarrow \mathbb{R}\) be defined as \(F(x,y)=2\max_{x_{i} \in x, {y_{i} \in y}}x(y-x)\), where \(x =(x_{1},x_{2},x_{3}), y=(y_{1},y_{2},y_{3})\in C\), and let \(G:Q\times Q\rightarrow \mathbb{R}\) be defined as \(G(u,v)=\max_{u_{i}\in u,{v_{i}\in v}}u(v-u)\), where \(u =(u_{1},u_{2},u_{3}),v=(v_{1},v_{2},v_{3}) \in Q\). Let \(\phi _{f}:C \rightarrow \mathcal{H}_{1}\) be defined as \(\phi _{f}(x)=0\) for each \(x \in C\), and let \(\phi _{g}:Q \rightarrow \mathcal{H}_{2}\) be defined as \(\phi _{g}(u)=0\) for each \(u \in Q\). For \(r > 0\), let \(T^{F}_{r}x=P_{C}x\) and \(T^{G}_{r}x=P_{Q}x\). Moreover, we define three mappings \(h,A,B:\mathbb{R}^{3}\rightarrow \mathbb{R}^{3}\) as follows:

$$h(x)= \begin{pmatrix} 1 & -1 & 5 \\ 0 & 1 & 3 \\ 0 & 0 & 2 \end{pmatrix} \begin{pmatrix} x_{1} \\ x_{2} \\ x_{3} \end{pmatrix}, \qquad Ax=3x+(1,2,1)\quad \text{and}\quad Bx=4x $$

for all \(x =(x_{1},x_{2},x_{3}) \in \mathbb{R}^{3}\).

Choose \(\alpha _{n} =\frac{n}{100n+1}\), \(\beta _{n} =\frac{n}{100n+1}\), \(r_{n} = \frac{1}{5}\), \(L=3\), and \(s = 0.1\).

Take the inertial parameter

$$\theta _{n}= \textstyle\begin{cases} \min \{\frac{1}{n^{2} \Vert x_{n}-x_{n-1} \Vert },0.5\}& \text{if }x_{n}\neq x_{n-1}; \\ 0.5& \text{otherwise}. \end{cases} $$

With these choices we can construct the strongly convergent sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{u_{n}\}\) defined in Theorem 4.1.

Proof

It is easy to prove that the bifunctions F and G satisfy Assumption 2.3(A1)–(A4) and G is upper semicontinuous. The operator h is a bounded linear operator on \(\mathbb{R}^{3}\) with adjoint operator \(h^{\ast }\) and \(\|h\| = \|h^{\ast }\| = 3\). Moreover, it is clear that A is \(1/3\)-inverse strongly monotone and B is maximal monotone. Furthermore, it is easy to observe that, for \(s > 0\),

$$\begin{aligned} J^{B}_{s}(x-sAx)&=(I+sB)^{-1}(x-sAx) \\ &= \frac{1-3s}{1+4s}x-\frac{s}{1+4s}(1,2,1). \end{aligned}$$

Note that \(\operatorname{Sol}(\operatorname{MEP}(F,\phi _{f}))=\{0\}=\operatorname{Sol}(\operatorname{MEP}(G,\phi _{g}))\). Hence \(\varGamma = (A + B)^{-1}(0)\cap \varOmega = \{0\}\). Now we compute the desired sequences in the following six steps.
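The closed form of \(J^{B}_{s}(x-sAx)\) above is easy to verify numerically; a minimal NumPy check follows, where the test point x and the step size s are arbitrary admissible choices.

```python
import numpy as np

c = np.array([1.0, 2.0, 1.0])                 # constant vector in Ax = 3x + (1,2,1)
A = lambda x: 3 * x + c                       # 1/3-inverse strongly monotone
resolvent_B = lambda y, s: y / (1 + 4 * s)    # (I + sB)^{-1} y  with  Bx = 4x

def forward_backward(x, s):
    return resolvent_B(x - s * A(x), s)       # J_s^B (x - sAx)

x, s = np.array([0.3, -0.2, 0.5]), 0.1
closed_form = (1 - 3 * s) / (1 + 4 * s) * x - s / (1 + 4 * s) * c
assert np.allclose(forward_backward(x, s), closed_form)
```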

Step 1. Find \(z \in Q\) such that \(G(z,y)+\phi _{g}(y)-\phi _{g}(z)+\frac{1}{r}\langle y - z,z - hx \rangle \geq 0\) for all \(y \in Q\).

Observe that

$$\begin{aligned} &G(z,y)+\phi _{g}(y)-\phi _{g}(z)+\frac{1}{r}\langle y - z,z - hx \rangle \geq 0 \\ &\quad\Leftrightarrow \quad z(y-z)+\frac{1}{r}\langle y - z,z - hx \rangle \geq 0 \\ &\quad\Leftrightarrow \quad rz(y-z)+(y-z) (z-hx)\geq 0 \\ &\quad\Leftrightarrow \quad (y-z) \bigl((1+r)z-hx\bigr)\geq 0 \end{aligned}$$

for all \(y \in Q\). Thus, by Lemma 2.4(2), \(T^{G}_{r}\) is single-valued, and hence \(z=T^{G}_{r}(hx)=\frac{hx}{1+r}\).

Step 2. Find \(m \in C\) such that \(m = x-\gamma h^{\ast }(I-T_{r}^{G})hx\).

It follows from Step 1 that

$$\begin{aligned} m&=x-\gamma h^{\ast }\bigl(I-T_{r}^{G}\bigr)hx \\ &= x - \gamma \biggl(3x-\frac{3(hx)}{1+r}\biggr) \\ &= (1-3\gamma )x+\frac{3\gamma }{1+r}(hx). \end{aligned}$$

Step 3. Find \(u \in C\) such that \(F(u,v)+\phi _{f}(v)-\phi _{f}(u)+\frac{1}{r}\langle v-u,u-m\rangle \geq 0\) for all \(v \in C\). From Step 2, we have

$$\begin{aligned} &F(u,v)+\phi _{f}(v)-\phi _{f}(u)+\frac{1}{r}\langle v-u,u-m\rangle \geq 0\\ &\quad\Leftrightarrow \quad (2u) (v - u)+\frac{1}{r}\langle v-u, u-m \rangle \geq 0 \\ &\quad\Leftrightarrow\quad r(2u) (v-u)+(v-u) (u-m)\geq 0 \\ &\quad\Leftrightarrow\quad (v-u) \bigl((1+2r)u-m\bigr)\geq 0 \end{aligned}$$

for all \(v \in C\). Similarly, by Lemma 2.4(2), we obtain \(u=\frac{m}{1+2r}=\frac{(1-3\gamma )x}{1+2r}+ \frac{3\gamma hx}{(1+r)(1+2r)}\).
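Numerically, the inner step \(u=T_{r}^{F} (x-\gamma h^{\ast }(I-T_{r}^{G})hx )\) of this example can be evaluated directly from the resolvent formulas of Steps 1 and 3, namely \(T_{r}^{G}w=\frac{w}{1+r}\) and \(T_{r}^{F}m=\frac{m}{1+2r}\). The following NumPy sketch does so; the test point, γ, and r are arbitrary admissible choices.

```python
import numpy as np

h = np.array([[1.0, -1.0, 5.0],
              [0.0,  1.0, 3.0],
              [0.0,  0.0, 2.0]])

def T_G(w, r):                 # Step 1:  T_r^G w = w / (1 + r)
    return w / (1 + r)

def T_F(m, r):                 # Step 3:  T_r^F m = m / (1 + 2r)
    return m / (1 + 2 * r)

def inner_step(x, gamma, r):
    """u = T_r^F( x - gamma * h^T (I - T_r^G) h x )."""
    hx = h @ x
    m = x - gamma * h.T @ (hx - T_G(hx, r))
    return T_F(m, r)

u = inner_step(np.array([0.2, 0.1, -0.3]), gamma=0.05, r=0.2)
```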

Step 4.

$$\textstyle\begin{cases} x_{0},x_{1} \in \mathbb{R}^{3}, \\ y_{n}=x_{n}+\theta _{n}(x_{n}-x_{n-1}), \\ u_{n}=\frac{n}{100n+1} y_{n}+(1-\frac{n}{100n+1})( \frac{(1-3\gamma )y_{n}}{1+2r}+\frac{3\gamma hy_{n}}{(1+r)(1+2r)}), \\ z_{n}=\frac{n}{100n+1}u_{n}+(1-\frac{n}{100n+1})(\frac{1-3s}{1+4s}u_{n}-\frac{s}{1+4s}(1,2,1)).\end{cases} $$

Step 5. Find

\(C_{n+1}=\{z \in C_{n}:\|z_{n}-z\|^{2}\leq \|x_{n}-z\|^{2}+2\theta ^{2}_{n} \|x_{n}-x_{n-1}\|^{2}-2\theta _{n}\langle x_{n}-z,x_{n-1}-x_{n} \rangle \}\).

Since the defining inequality \(\|z_{n}-z\|^{2}\leq \|x_{n}-z\|^{2}+2\theta ^{2}_{n}\|x_{n}-x_{n-1}\|^{2}-2\theta _{n}\langle x_{n}-z,x_{n-1}-x_{n}\rangle \) is linear in z, it is equivalent to \(\langle a_{n},z\rangle \leq b_{n}\), where

$$ a_{n}=2(x_{n}-z_{n})+2\theta _{n}(x_{n}-x_{n-1}), \qquad b_{n}= \Vert x_{n} \Vert ^{2}- \Vert z_{n} \Vert ^{2}+2\theta _{n}^{2} \Vert x_{n}-x_{n-1} \Vert ^{2}+2\theta _{n} \langle x_{n},x_{n}-x_{n-1} \rangle . $$

Hence \(C_{n+1}=C_{n}\cap \{z:\langle a_{n},z\rangle \leq b_{n}\}\) is the intersection of \(C_{n}\) with a halfspace.

Step 6. Compute the numerical results of \(x_{n+1}=P_{C_{n+1}}x_{1}\).

We provide a numerical comparison between the inertial forward-backward method of Theorem 4.1 and the standard forward-backward method (i.e., \(\theta _{n}=0\)). The stopping criterion is \(E_{n}=\Vert x_{n+1}-x_{n}\Vert <10^{-9}\). □
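For completeness, the whole test can be written as a short, self-contained script (NumPy, with CVXPY used only for the small projection subproblem). The starting points, the value of γ, the solver, and the choice of taking \(C_{1}\) to be the whole space are illustrative assumptions, not part of the statement of the example; \(C_{n+1}\) is handled through the halfspace form of Step 5, so that \(x_{n+1}=P_{C_{n+1}}x_{1}\) is obtained from a small quadratic program over the accumulated halfspaces.

```python
import numpy as np
import cvxpy as cp

h = np.array([[1.0, -1.0, 5.0],
              [0.0,  1.0, 3.0],
              [0.0,  0.0, 2.0]])
c = np.array([1.0, 2.0, 1.0])                      # constant in Ax = 3x + (1,2,1)

A_op = lambda x: 3 * x + c
J = lambda u, s: (u - s * A_op(u)) / (1 + 4 * s)   # (I + sB)^{-1}(I - sA)u, Bx = 4x
T_G = lambda w, r: w / (1 + r)                     # Step 1
T_F = lambda m, r: m / (1 + 2 * r)                 # Step 3

def run(x0, x1, gamma=0.05, r=0.2, s=0.1, theta_bar=0.5,
        tol=1e-9, max_iter=500, inertial=True):
    """Inertial (or, with inertial=False, standard) forward-backward
    scheme of Theorem 4.1 for Example 5.1; C_1 is taken to be the whole
    space, and C_{n+1} is represented by the halfspaces of Step 5."""
    halfspaces = []                                # accumulated (a_k, b_k)
    x_prev, x = np.asarray(x0, float), np.asarray(x1, float)
    for n in range(1, max_iter + 1):
        gap = np.linalg.norm(x - x_prev)
        theta = 0.0
        if inertial:
            theta = theta_bar if gap == 0 else min(1.0 / (n**2 * gap), theta_bar)
        alpha = beta = n / (100 * n + 1)
        y = x + theta * (x - x_prev)
        hy = h @ y
        u = alpha * y + (1 - alpha) * T_F(y - gamma * h.T @ (hy - T_G(hy, r)), r)
        z = beta * u + (1 - beta) * J(u, s)
        d = x - x_prev                             # halfspace data of Step 5
        a_vec = 2 * (x - z) + 2 * theta * d
        b_val = x @ x - z @ z + 2 * theta**2 * (d @ d) + 2 * theta * (x @ d)
        halfspaces.append((a_vec, b_val))
        # x_{n+1} = P_{C_{n+1}} x_1: small QP over the accumulated halfspaces
        v = cp.Variable(3)
        constraints = [a_k @ v <= b_k for a_k, b_k in halfspaces]
        cp.Problem(cp.Minimize(cp.sum_squares(v - np.asarray(x1, float))),
                   constraints).solve()
        x_next = np.asarray(v.value).ravel()
        if np.linalg.norm(x_next - x) < tol:       # stopping criterion E_n
            return x_next, n
        x_prev, x = x, x_next
    return x, max_iter

x_star, iters = run(x0=[1.0, 1.0, 1.0], x1=[0.5, -0.5, 0.5])
```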

The error \(E_{n}\) is plotted for each choice of parameters in Table 1 in Figs. 1–3, respectively.

Figure 1: Error plot \(E_{n}\) for Choice 1 of Example 5.1 (see Table 1)

Figure 2: Error plot \(E_{n}\) for Choice 2 of Example 5.1 (see Table 1)

Figure 3: Error plot \(E_{n}\) for Choice 3 of Example 5.1 (see Table 1)

Table 1 Numerical results for Example 5.1

6 Conclusion

In this paper we have proposed an iterative algorithm for the monotone inclusion problem combined with the split mixed equilibrium problem, built from a modified forward-backward splitting algorithm with inertial extrapolation and the shrinking projection method in Hilbert spaces. We have established weak and strong convergence of the algorithm to a common solution under a suitable set of conditions on the parameters. As demonstrated in Sect. 5, the algorithm is easy to implement, and its numerical performance has been compared with that of the standard forward-backward method. A natural direction for future work is to extend these results to other classes of monotone inclusion problems combined with fixed point problems and/or equilibrium problems in Hilbert spaces.