
Algorithms for the common solution of the split variational inequality problems and fixed point problems with applications

  • Panisa Lohawech
  • Anchalee Kaewcharoen
  • Ali Farajzadeh
Open Access
Research

Abstract

In this paper, we establish a new iterative algorithm by combining Nadezhkina and Takahashi’s modified extragradient method and Xu’s algorithm. The mentioned iterative algorithm presents the common solution of the split variational inequality problems and fixed point problems. We show that the sequence produced by our algorithm is weakly convergent. Finally, we give some applications of the main results. This article extends the previous results in this area.

Keywords

Variational inequality problems · Extragradient methods · CQ algorithms

MSC

58E35 · 47H10

1 Introduction

The variational inequality problem (VIP) is the problem of finding a point \(x^{*}\) in a subset C of a Hilbert space H such that
$$ \bigl\langle f\bigl(x^{*}\bigr), x - x^{*} \bigr\rangle \geq0\quad \text{for all } x \in C, $$
(1.1)
where \(f:C \rightarrow H\) is a mapping, and we denote the solution set of (1.1) by \(\operatorname{VI}(C,f)\). The VIP was introduced by Stampacchia [24]. In 1966, Hartman and Stampacchia [17] suggested the VIP as a tool for the study of partial differential equations. The ideas of the VIP are applied in many fields including mechanics, nonlinear programming, game theory, economic equilibrium, and so on. Moreover, it contains fixed point problems, optimization problems, complementarity problems, and systems of nonlinear equations as special cases (see [3, 12, 20, 21, 22, 29, 38, 40, 41]). Using the projection technique in [26], we know that the VIP is equivalent to a fixed point problem, that is,
$$ x^{*} \in\operatorname{VI}(C,f) \quad \text{if and only if}\quad x^{*} = P_{C}(I - \gamma f)x^{*}, $$
where \(\gamma> 0\) and \(P_{C}\) is the metric projection of H onto C. In [36], it is shown that if f is η-strongly monotone and k-Lipschitz continuous and \(0 < \gamma< \frac{2\eta}{k^{2}}\), then \(P_{C}(I - \gamma f)\) is a contraction on C, and hence the following sequence \(\{x_{n}\}\) of Picard iterates converges strongly to the unique point of \(\operatorname{VI}(C,f)\):
$$ x_{n+1} = P_{C}(I - \gamma f)x_{n}. $$
(1.2)
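For concreteness, iteration (1.2) can be sketched numerically; the concrete data below (a box C, so that \(P_{C}\) is a componentwise clip, and an affine strongly monotone f) are our own illustrative choices, not taken from the paper:

```python
import numpy as np

# Sketch of the Picard iteration (1.2): x_{n+1} = P_C(I - gamma*f)x_n.
# Illustrative assumptions (ours): C = [0, 1]^2, so P_C is a clip, and
# f(x) = M x + q with M symmetric positive definite, hence f is
# eta-strongly monotone (eta = lambda_min(M)) and k-Lipschitz (k = lambda_max(M)).
M = np.array([[2.0, 0.5], [0.5, 1.0]])
q = np.array([-1.0, 0.5])
f = lambda x: M @ x + q
P_C = lambda x: np.clip(x, 0.0, 1.0)

eta = np.linalg.eigvalsh(M)[0]    # strong monotonicity constant
k = np.linalg.eigvalsh(M)[-1]     # Lipschitz constant
gamma = eta / k**2                # any gamma in (0, 2*eta/k^2) works

x = np.array([1.0, 1.0])
for _ in range(500):
    x = P_C(x - gamma * f(x))

# x should now satisfy the fixed point equation x = P_C(x - gamma*f(x)).
residual = np.linalg.norm(x - P_C(x - gamma * f(x)))
print(residual)  # close to 0
```

Since \(P_{C}(I-\gamma f)\) is a contraction here, the residual decays geometrically.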
However, algorithm (1.2) cannot be used to solve VIP when f is monotone and k-Lipschitz continuous, which can be seen from the counterexample in [43]. During the last decade, many authors devoted their attention to studying algorithms for solving the VIP. One of the methods is the extragradient method which was introduced and studied in 1976 by Korpelevich [19] in the finite dimensional Euclidean space \({\mathbb {R}}^{n}\):
$$ \begin{aligned} &y_{n} = P_{C}(x_{n} - \gamma fx_{n}), \\ &x_{n+1} = P_{C}(x_{n} - \gamma fy_{n}), \end{aligned} $$
(1.3)
where f is monotone and k-Lipschitz continuous. Then the sequence \(\{x_{n}\}\) converges to a solution of the VIP.
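The contrast between (1.2) and (1.3) can be seen on a small example of our own (in the spirit of, but not taken from, the counterexample in [43]): \(f(x)=Ax\) with A skew-symmetric is monotone and 1-Lipschitz but not strongly monotone, and on \(C={\mathbb {R}}^{2}\) the Picard iterates diverge while the extragradient iterates converge to the unique solution 0:

```python
import numpy as np

# Illustration (ours, not from the paper): f(x) = A x with A skew-symmetric
# is monotone and 1-Lipschitz, but not strongly monotone.  On C = R^2 the
# projection P_C is the identity and VI(C, f) = {0}.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
f = lambda x: A @ x
gamma = 0.5                     # gamma in (0, 1/k) with k = 1

x_picard = np.array([1.0, 0.0])
x_extra = np.array([1.0, 0.0])
for _ in range(100):
    # Picard iteration (1.2): spirals outward, ||x|| grows each step.
    x_picard = x_picard - gamma * f(x_picard)
    # Extragradient iteration (1.3): predictor y, then corrector step.
    y = x_extra - gamma * f(x_extra)
    x_extra = x_extra - gamma * f(y)

print(np.linalg.norm(x_picard))  # large: diverging
print(np.linalg.norm(x_extra))   # close to 0: converging
```

The extra evaluation of f at the predictor point y is exactly what tames the rotational component of a merely monotone operator.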
Takahashi and Toyoda [28] illustrated that if \(S:C \rightarrow C\) is a nonexpansive mapping and I is the identity mapping on H, then \(f=I-S\) is \(\frac{1}{2}\)-inverse strongly monotone and \(\operatorname{VI}(C,f)= \operatorname{Fix}(S)\). Motivated and inspired by the mentioned fact, they introduced and studied the following method for finding a common element of \(\operatorname{VI}(C,f)\cap\operatorname{Fix}(S)\):
$$ x_{n+1}=\lambda_{n} x_{n}+(1-\lambda_{n})SP_{C}(x_{n}- \gamma_{n} fx_{n}), $$
(1.4)
when \(S: C \rightarrow C\) is a nonexpansive mapping and \(f:C \rightarrow H\) is a ν-inverse strongly monotone mapping.
After that, Nadezhkina and Takahashi [27] suggested the following modified extragradient method motivated by the idea of Korpelevich [19]:
$$ \begin{aligned} &y_{n} = P_{C}(x_{n}- \gamma_{n} fx_{n}), \\ &x_{n+1} = \lambda_{n} x_{n}+(1- \lambda_{n})SP_{C}(x_{n}-\gamma_{n} fy_{n}), \end{aligned} $$
(1.5)
when \(S: C \rightarrow C\) is a nonexpansive mapping and \(f:C \rightarrow H\) is a monotone and k-Lipschitz continuous mapping. They showed that the sequence generated by the mentioned method converges weakly to an element in \(\operatorname{VI}(C,f)\cap\operatorname{Fix}(S)\).

Since then, it has been used to study the problems of finding a common solution of VIP and fixed point problem (see [42] and the references therein).

The split feasibility problem (SFP), proposed by Censor and Elfving [10], is the problem of finding a point
$$ x \in C \quad \text{and} \quad Ax \in Q, $$
(1.6)
where C and Q are nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, and \(A:H_{1} \rightarrow H_{2}\) is a bounded linear operator. Since then, the SFP has been widely used in many applications such as signal processing, intensity-modulation therapy treatment planning, phase retrievals and other fields (see [5, 6, 9, 15, 18, 37] and the references therein).
One of the popular methods for solving the SFP is the CQ algorithm presented by Byrne [5] in 2002 as follows:
$$ x_{n+1}= P_{C}\bigl(x_{n} - \gamma A^{*}(I-P_{Q})Ax_{n}\bigr), $$
(1.7)
where \(0 < \gamma< \frac{2}{\|A\|^{2}}\) and \(A^{*}\) is the adjoint operator of A.
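A minimal sketch of the CQ algorithm (1.7) follows; the sets C and Q and the operator A below are illustrative assumptions of ours, chosen so that both projections have closed forms and the SFP is consistent:

```python
import numpy as np

# Sketch of the CQ algorithm (1.7) for the SFP, under illustrative choices
# (ours, not the paper's): C = closed unit ball in R^2, Q = [0.5, 1]^2,
# and A a fixed diagonal 2x2 matrix.  Both projections have closed forms.
A = np.array([[2.0, 0.0], [0.0, 3.0]])
P_C = lambda x: x / max(1.0, np.linalg.norm(x))
P_Q = lambda y: np.clip(y, 0.5, 1.0)

gamma = 1.0 / np.linalg.norm(A, 2) ** 2   # gamma in (0, 2/||A||^2)

x = np.zeros(2)
for _ in range(2000):
    x = P_C(x - gamma * A.T @ (A @ x - P_Q(A @ x)))

# Since this SFP is consistent, x ends up (approximately) feasible:
print(np.linalg.norm(x) <= 1.0 + 1e-8)     # x in C
print(np.linalg.norm(A @ x - P_Q(A @ x)))  # Ax close to Q
```

The update is a projected gradient step for \(\frac{1}{2}\| (I-P_{Q})Ax \|^{2}\), which is why the step size restriction involves \(\|A\|^{2}\).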
Since (1.7) can be viewed as a fixed point algorithm for averaged mappings, Xu [34] applied the Krasnosel'skii–Mann (K-M) algorithm to present the following algorithm for solving the SFP:
$$ x_{n+1}=(1-\alpha_{n})x_{n}+\alpha_{n} P_{C}\bigl(x_{n} - \gamma A^{*}(I-P_{Q})Ax_{n} \bigr). $$
(1.8)
The split variational inequality problem (SVIP) is the problem of finding a point
$$ \begin{aligned} &x^{*} \in C \quad \text{such that}\quad \bigl\langle f\bigl(x^{*}\bigr), x - x^{*} \bigr\rangle \geq 0,\quad \text{for all } x \in C,\quad \text{and} \\ &y^{*}=Ax^{*} \in Q \quad \text{solves}\quad \bigl\langle g\bigl(y^{*}\bigr), y - y^{*} \bigr\rangle \geq 0, \quad \text{for all } y \in Q, \end{aligned} $$
(1.9)
where C and Q are nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively, \(f:H_{1} \rightarrow H_{1}\) and \(g:H_{2} \rightarrow H_{2}\) are mappings, and \(A: H_{1} \rightarrow H_{2}\) is a bounded linear operator. The SVIP was first investigated by Censor et al. [11]; it includes the split feasibility problem, split zero problem, variational inequality problem, and split minimization problem as special cases (see [5, 7, 11, 16, 31, 39]).
In 2017, Tian and Jiang [32] considered the following iteration method by combining extragradient method with CQ algorithm for solving the SVIP:
$$\begin{aligned}& y_{n} = P_{C}\bigl(x_{n} - \gamma_{n} A^{*}\bigl(I - P_{Q}(I-\theta g)\bigr)Ax_{n}\bigr), \\& z_{n} = P_{C}\bigl(y_{n} - \lambda_{n}f(y_{n}) \bigr), \\& x_{n+1} = P_{C}\bigl(y_{n} - \lambda_{n}f(z_{n}) \bigr), \end{aligned}$$
(1.10)
where \(A:H_{1} \rightarrow H_{2}\) is a bounded linear operator, \(f:C \rightarrow H_{1}\) is a monotone and k-Lipschitz continuous mapping, and \(g:H_{2} \rightarrow H_{2}\) is a δ-inverse strongly monotone mapping.

In this paper, we establish a new iterative algorithm by combining Nadezhkina and Takahashi’s modified extragradient method and Xu’s algorithm. The mentioned iterative algorithm presents the common solution of the split variational inequality problems and fixed point problems. We show that the sequence produced by our algorithm is weakly convergent. Finally, we give some applications of the main results. This article extends the results that appeared in [32].

2 Preliminaries

In order to prove our main results, we recall the following definitions and preliminary results that will be used in the sequel. Throughout this section, let C be a closed convex subset of a real Hilbert space H.

A mapping \(T:C \rightarrow C\) is said to be k-Lipschitz continuous with \(k>0\), if
$$ \|Tx-Ty\| \leq k\|x-y\| $$
for all \(x, y \in C\). A mapping T is said to be nonexpansive when \(k=1\). We say that \(x \in C\) is a fixed point of T if \(Tx=x\) and the set of all fixed points of T is denoted by \(\operatorname{Fix}(T)\). It is well known that if C is a nonempty bounded closed convex subset of H and \(T:C \rightarrow C\) is nonexpansive, then \(\operatorname{Fix}(T) \neq\emptyset\). Moreover, for a fixed \(\alpha\in(0,1)\), a mapping \(T:H \rightarrow H\) is α-averaged if and only if it can be written as the average of the identity mapping on H and a nonexpansive mapping \(S:H \rightarrow H\), i.e.,
$$ T = (1-\alpha)I+\alpha S. $$
Recall that a mapping \(f: C \rightarrow H\) is called η-strongly monotone with \(\eta> 0\) if
$$ \langle fx-fy, x-y \rangle\geq\eta\|x-y\|^{2} $$
for all \(x, y \in C\). If \(\eta=0\), then the mapping f is said to be monotone. Further, a mapping f is said to be ν-inverse strongly monotone with \(\nu>0\) (ν-ism) if
$$ \langle fx-fy, x-y \rangle\geq\nu\|fx-fy\|^{2} $$
for all \(x, y \in C\). From [1], we know that an η-strongly monotone mapping f is monotone and that a ν-ism mapping f is monotone and \(\frac{1}{\nu}\)-Lipschitz continuous. Moreover, \(I-\lambda f\) is nonexpansive for \(\lambda\in(0,2\nu)\); see [34] for more details on averaged and ν-ism mappings.
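These facts can be sanity-checked numerically; the matrix M and the choice \(f(x)=Mx\) below are our own illustrative assumptions (for symmetric positive definite M, f is ν-ism with \(\nu = 1/\lambda_{\max}(M)\)):

```python
import numpy as np

# Numerical check (illustrative) of the nu-ism properties quoted above:
# for f(x) = M x with M symmetric positive definite, f is nu-ism with
# nu = 1/lambda_max(M), and I - lambda*f is nonexpansive for lambda in (0, 2*nu).
rng = np.random.default_rng(0)
M = np.array([[2.0, 0.5], [0.5, 1.0]])
f = lambda x: M @ x
nu = 1.0 / np.linalg.eigvalsh(M)[-1]
lam = nu                                  # any lambda in (0, 2*nu)

ok_ism, ok_nonexp = True, True
for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    lhs = (f(x) - f(y)) @ (x - y)
    # nu-ism inequality: <fx - fy, x - y> >= nu * ||fx - fy||^2
    ok_ism &= lhs >= nu * np.linalg.norm(f(x) - f(y)) ** 2 - 1e-10
    # nonexpansiveness of I - lam*f
    ok_nonexp &= (np.linalg.norm((x - lam * f(x)) - (y - lam * f(y)))
                  <= np.linalg.norm(x - y) + 1e-10)
print(ok_ism, ok_nonexp)
```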

Lemma 2.1

([8])

Given \(x \in H\) and \(z \in C\). Then the following statements are equivalent:
  1. (i)

    \(z = P_{C}x\);

     
  2. (ii)

    \(\langle x - z, z - y \rangle\geq0\) for all \(y \in C\);

     
  3. (iii)

    \(\|x - y\|^{2} \geq\|x - z\|^{2} + \|y - z\|^{2}\) for all \(y \in C\).

     

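Lemma 2.1 can be verified numerically on a simple example; the box C below is our own illustrative choice, for which \(P_{C}\) is a componentwise clip:

```python
import numpy as np

# Numerical sanity check of Lemma 2.1 for C = [0, 1]^2 (an illustrative
# choice of ours), where the metric projection is z = P_C(x) = clip(x).
rng = np.random.default_rng(1)
P_C = lambda x: np.clip(x, 0.0, 1.0)

for _ in range(1000):
    x = 3.0 * rng.standard_normal(2)
    z = P_C(x)
    y = rng.uniform(0.0, 1.0, 2)          # an arbitrary point of C
    # (ii): <x - z, z - y> >= 0
    assert (x - z) @ (z - y) >= -1e-12
    # (iii): ||x - y||^2 >= ||x - z||^2 + ||y - z||^2
    assert (np.linalg.norm(x - y) ** 2
            >= np.linalg.norm(x - z) ** 2 + np.linalg.norm(y - z) ** 2 - 1e-12)
print("Lemma 2.1 (ii) and (iii) hold on all sampled points")
```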
We need the following definitions about set-valued mappings for proving our main results.

Definition 2.2

([30])

Let \(B:H \rightrightarrows H\) be a set-valued mapping with the effective domain \(D(B) = \{x \in H : Bx \neq\emptyset\}\).

The set-valued mapping B is said to be monotone if, for each \(x, y \in D(B)\), \(u \in Bx\), and \(v \in By\), we have
$$\langle x - y,u - v\rangle\geq0. $$

Also the monotone set-valued mapping B is said to be maximal if its graph \(G(B)=\{(x, y) : y \in Bx\}\) is not properly contained in the graph of any other monotone set-valued mappings.

The following property of the maximal monotone mappings is very convenient and helpful to use:

A monotone mapping B is maximal if and only if, for \((x,u) \in H \times H\),
$$\langle x - y,u - v \rangle\geq0 \quad \text{for each } (y, v) \in G(B) \quad \text{implies}\quad u \in Bx. $$
For a maximal monotone set-valued mapping B on H and \(r > 0\), the operator
$$J_{r} := (I + rB)^{-1} : H \rightarrow D(B) $$
is called the resolvent of B.

Remark 2.3

From [14], we know that \(\operatorname{Fix}(J_{r}) = B^{-1}0\) for all \(r > 0\) and that \(J_{r}\) is firmly nonexpansive, that is,
$$\|J_{r}x - J_{r}y\|^{2} \leq\langle J_{r}x-J_{r}y,x-y \rangle\quad \text{for all } x,y \in H. $$
Indeed, from the definitions of the scalar multiplication, addition, and inversion of set-valued mappings, we have
$$(x,y)\in G(B) \quad \Leftrightarrow\quad (x+ry,x)\in G(J_{r}). $$
Hence, for all \((x,y),(x^{*},y^{*})\in G(B)\), we get
$$\begin{aligned} B \text{ is monotone} \quad \Leftrightarrow\quad & \bigl\langle x^{*}-x, y^{*} - y \bigr\rangle \geq0 \\ \quad \Leftrightarrow\quad & \bigl\langle x^{*}-x, ry^{*} - ry \bigr\rangle \geq0 \\ \quad \Leftrightarrow\quad & \bigl\langle x^{*}-x, x^{*}-x+ry^{*} - ry \bigr\rangle \geq \bigl\Vert x^{*}-x \bigr\Vert ^{2} \\ \quad \Leftrightarrow\quad & \bigl\langle J_{r}\bigl(x^{*}+ry^{*} \bigr)-J_{r}(x+ry), \bigl(x^{*}+ry^{*}\bigr) - (x+ry)\bigr\rangle \\ &\quad \geq \bigl\Vert J_{r}\bigl(x^{*}+ry^{*}\bigr)-J_{r}(x+ry) \bigr\Vert ^{2} \\ \quad \Leftrightarrow\quad & J_{r} \text{ is firmly nonexpansive}. \end{aligned}$$
Let \(f:C \rightarrow H\) be a monotone and k-Lipschitz continuous mapping. From [2], we know that the normal cone to C defined by
$$N_{C}x = \bigl\{ z \in H : \langle z, y - x \rangle\leq0, \text{for all } y \in C\bigr\} \quad \text{for all } x \in C $$
is a maximal monotone mapping and a resolvent of \(N_{C}\) is \(P_{C}\).

The following results play the crucial role in the next section.

Lemma 2.4

([27])

Let \(H_{1}\) and \(H_{2}\) be real Hilbert spaces. Let \(B : H_{1} \rightrightarrows{H_{1}}\) be a maximal monotone mapping and \(J_{r}\) be the resolvent of B for \(r > 0\). Suppose that \(T : H_{2} \rightarrow H_{2}\) is a nonexpansive mapping and \(A : H_{1} \rightarrow H_{2}\) is a bounded linear operator. Assume that \(B^{-1}0 \cap A^{-1}\operatorname{Fix}(T) \neq\emptyset\). Let \(r, \gamma> 0 \) and \(z \in H_{1}\). Then the following statements are equivalent:
  1. (i)

    \(z = J_{r}(I - \gamma A^{*}(I - T)A)z\);

     
  2. (ii)

    \(0 \in A^{*}(I - T)Az + Bz\);

     
  3. (iii)

    \(z \in B^{-1}0 \cap A^{-1}\operatorname{Fix}(T)\).

     

Lemma 2.5

([23])

Let \(\{\alpha_{n}\}\) be a real sequence satisfying \(0< a \leq\alpha_{n}\leq b< 1\) for all \(n \geq0\), and let \(\{v_{n}\}\) and \(\{w_{n}\}\) be two sequences in H such that, for some \(\sigma\geq0\),
$$\begin{aligned}& \limsup_{n \rightarrow \infty}\|v_{n}\| \leq \sigma, \\& \limsup_{n \rightarrow \infty}\|w_{n}\| \leq \sigma, \\& \textit{and}\quad \lim_{n \rightarrow \infty} \bigl\Vert \alpha_{n}v_{n}+(1- \alpha_{n})w_{n} \bigr\Vert =\sigma. \end{aligned}$$
Then \(\lim_{n \rightarrow \infty}\|v_{n}-w_{n}\|=0\).

Lemma 2.6

([35])

Let \(\{x_{n}\}\) be a sequence in H satisfying the properties:
  1. (i)

    \(\lim_{n \rightarrow \infty}\|x_{n}-u\|\) exists for each \(u \in C\);

     
  2. (ii)

    \(\omega_{w}(x_{n}) \subset C\).

     
Then \(\{x_{n}\}\) converges weakly to a point in C.

Theorem 2.7

([27])

Let \(f:C \rightarrow H\) be a monotone and k-Lipschitz continuous mapping. Assume that \(S:C \rightarrow C\) is a nonexpansive mapping such that \(\operatorname{VI}(C,f)\cap\operatorname{Fix}(S) \neq\emptyset\). Let \(\{x_{n}\}\) and \(\{y_{n}\}\) be sequences generated by (1.5), where \(\{\gamma_{n}\}\subset[a,b]\) for some \(a,b \in(0,\frac{1}{k})\) and \(\{\lambda_{n}\} \subset[c,d]\) for some \(c,d \in(0,1)\). Then the sequences \(\{x_{n}\}\) and \(\{y_{n}\}\) converge weakly to the same point \(z \in\operatorname{VI}(C,f)\cap\operatorname{Fix}(S)\), where \(z = \lim_{n \rightarrow \infty} P_{\operatorname{VI}(C,f)\cap\operatorname{Fix}(S)}x_{n}\).

Theorem 2.8

([34])

Assume that the solution set of the SFP is consistent and \(0< \gamma< \frac{2}{\|A\|^{2}}\). Let \(\{x_{n}\}\) be defined by the averaged CQ algorithm (1.8), where \(\{\alpha_{n}\}\) is a sequence in \([0,\frac{4}{2+\gamma\|A\|^{2}} ]\) satisfying the condition
$$\sum_{n=1}^{\infty} \alpha_{n} \biggl( \frac{4}{2+\gamma\|A\| ^{2}}-\alpha_{n} \biggr)= \infty. $$
Then the sequence \(\{x_{n}\}\) converges weakly to a point in the solution set of the SFP.

3 Main results

Our aim in this section is to consider an iterative method by combining Nadezhkina and Takahashi’s modified extragradient method with Xu’s algorithm for solving the split variational inequality problems and fixed point problems.

Throughout the remaining results, unless otherwise stated, we assume that C and Q are nonempty closed convex subsets of real Hilbert spaces \(H_{1}\) and \(H_{2}\), respectively. Suppose that \(A : H_{1} \rightarrow H_{2}\) is a nonzero bounded linear operator, \(f : C\rightarrow H_{1}\) is a monotone and k-Lipschitz continuous mapping, and \(g: H_{2} \rightarrow H_{2}\) is a δ-inverse strongly monotone mapping. Suppose that \(T:H_{2} \rightarrow H_{2}\) and \(S : C \rightarrow C\) are nonexpansive. Let \(\{\mu_{n}\}, \{ \alpha_{n}\} \subset(0,1)\), \(\{\gamma_{n}\} \subset[a, b]\) for some \(a, b \in(0, \frac{1}{\|A\|^{2}})\), and \(\{\lambda_{n}\} \subset[c, d]\) for some \(c, d \in(0,\frac{1}{k})\).

Firstly, we present an algorithm for solving the variational inequality problems and split common fixed point problems, that is, finding a point \(x^{*}\) such that
$$ x^{*} \in\operatorname{VI}(C, f) \cap\operatorname{Fix}(S) \quad \text{and}\quad Ax^{*} \in\operatorname{Fix}(T). $$
(3.1)

Theorem 3.1

Set \(\varGamma= \{z \in\operatorname{VI}(C, f)\cap \operatorname{Fix}(S) : Az \in\operatorname{Fix}(T)\}\) and assume that \(\varGamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{z_{n}\}\) be generated by \(x_{1} = x \in C\) and
$$\begin{aligned}& y_{n} = \mu_{n} x_{n}+(1-\mu_{n})P_{C} \bigl(x_{n} - \gamma_{n} A^{*}(I - T)Ax_{n}\bigr), \\& z_{n} = P_{C}\bigl(y_{n} - \lambda_{n}f(y_{n}) \bigr), \\& x_{n+1} = \alpha_{n} y_{n} + (1- \alpha_{n}) SP_{C}\bigl(y_{n} - \lambda _{n}f(z_{n})\bigr), \end{aligned}$$
(3.2)
for each \(n \in {\mathbb {N}}\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z \in\varGamma\), where \(z = \lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\).

Proof

It follows from Theorem 3.1 of [32] that \(P_{C}(I-\gamma _{n}A^{*}(I-T)A)\) is \(\frac{1+\gamma_{n}\|A\|^{2}}{2}\)-averaged. It is easy to see from Lemma 2.2 of [25] that \(\mu_{n} I+(1-\mu_{n})P_{C}(I - \gamma_{n} A^{*}(I - T)A)\) is \(\mu_{n}+(1-\mu_{n})\frac {1+\gamma_{n}\|A\|^{2}}{2}\)-averaged. So, \(y_{n}\) can be rewritten as
$$ y_{n}=(1-\beta_{n})x_{n}+\beta_{n}V_{n}x_{n}, $$
(3.3)
where \(\beta_{n}=\mu_{n}+(1-\mu_{n})\frac{1+\gamma_{n}\|A\|^{2}}{2}\) and \(V_{n}\) is a nonexpansive mapping for each \(n \in {\mathbb {N}}\).
Let \(u \in\varGamma\). Then we get that
$$\begin{aligned} \|y_{n}-u\|^{2} =& \bigl\Vert (1-\beta_{n}) (x_{n}-u)+\beta_{n}(V_{n}x_{n}-u) \bigr\Vert ^{2} \\ =& (1-\beta_{n})\|x_{n}-u\|^{2}+ \beta_{n}\|V_{n}x_{n}-u\|^{2} \\ &{} -\beta_{n}(1-\beta_{n})\|x_{n}-V_{n}x_{n} \|^{2} \\ \leq& \|x_{n}-u\|^{2}-\beta_{n}(1- \beta_{n})\|x_{n}-V_{n}x_{n} \|^{2} \\ \leq& \|x_{n} -u\|^{2}. \end{aligned}$$
(3.4)
Thus
$$ \beta_{n}(1-\beta_{n})\|x_{n}-V_{n}x_{n} \|^{2} \leq\|x_{n}-u\|^{2}-\|y_{n}-u \|^{2}. $$
(3.5)
Set \(t_{n}=P_{C}(y_{n}-\lambda_{n} fz_{n})\) for all \(n \geq0\). It follows from Lemma 2.1 that
$$\begin{aligned} \Vert t_{n}-u \Vert ^{2} \leq& \bigl\Vert y_{n}-\lambda_{n} f(z_{n})-u \bigr\Vert ^{2}- \bigl\Vert y_{n}-\lambda _{n}f(z_{n})-t_{n} \bigr\Vert ^{2} \\ \leq& \Vert y_{n}-u \Vert ^{2}- \Vert y_{n}-t_{n} \Vert ^{2}+2\lambda_{n} \bigl\langle f(z_{n}), u-t_{n}\bigr\rangle \\ =& \Vert y_{n}-u \Vert ^{2}- \Vert y_{n}-t_{n} \Vert ^{2}+2\lambda_{n} \bigl(\bigl\langle f(z_{n})-f(u), u-z_{n}\bigr\rangle \\ &{} +\bigl\langle f(u), u-z_{n}\bigr\rangle +\bigl\langle f(z_{n}), z_{n}-t_{n}\bigr\rangle \bigr) \\ \leq& \Vert y_{n}-u \Vert ^{2}- \Vert y_{n}-t_{n} \Vert ^{2}+2\lambda_{n} \bigl\langle f(z_{n}), z_{n}-t_{n}\bigr\rangle \\ =& \Vert y_{n}-u \Vert ^{2}- \Vert y_{n}-z_{n} \Vert ^{2}-2\langle y_{n}-z_{n}, z_{n}-t_{n}\rangle- \Vert z_{n}-t_{n} \Vert ^{2} \\ &{}+2\lambda_{n}\bigl\langle f(z_{n}), z_{n}-t_{n} \bigr\rangle \\ =& \Vert y_{n}-u \Vert ^{2}- \Vert y_{n}-z_{n} \Vert ^{2}- \Vert z_{n}-t_{n} \Vert ^{2} \\ &{} +2\bigl\langle y_{n}-\lambda_{n}f(z_{n})-z_{n}, t_{n}-z_{n}\bigr\rangle . \end{aligned}$$
Using Lemma 2.1 again, this yields
$$\begin{aligned} \bigl\langle y_{n}-\lambda_{n}f(z_{n})-z_{n}, t_{n}-z_{n}\bigr\rangle =& \bigl\langle y_{n}- \lambda_{n}f(y_{n})-z_{n}, t_{n}-z_{n} \bigr\rangle \\ &{} +\bigl\langle \lambda_{n}f(y_{n})-\lambda_{n}f(z_{n}), t_{n}-z_{n}\bigr\rangle \\ \leq& \bigl\langle \lambda_{n}f(y_{n})-\lambda_{n}f(z_{n}), t_{n}-z_{n}\bigr\rangle \\ \leq& \lambda_{n}k\|y_{n}-z_{n}\| \|t_{n}-z_{n}\|, \end{aligned}$$
and so
$$ \|t_{n}-u\|^{2} \leq\|y_{n}-u\|^{2}- \|y_{n}-z_{n}\|^{2}-\|z_{n}-t_{n} \|^{2}+2\lambda_{n}k\| y_{n}-z_{n}\| \|t_{n}-z_{n}\|. $$
For each \(n \in {\mathbb {N}}\), we obtain that
$$\begin{aligned} 0 \leq& \bigl( \Vert t_{n}-z_{n} \Vert - \lambda_{n}k\|y_{n}-z_{n}\|\bigr)^{2} \\ =&\|t_{n}-z_{n}\|^{2}-2\lambda_{n}k \|t_{n}-z_{n}\|\|y_{n}-z_{n}\|+ \lambda_{n}^{2}k^{2}\| y_{n}-z_{n} \|^{2}. \end{aligned}$$
That is,
$$ 2\lambda_{n}k\|t_{n}-z_{n}\|\|y_{n}-z_{n} \| \leq\|t_{n}-z_{n}\|^{2} + \lambda _{n}^{2}k^{2}\|y_{n}-z_{n} \|^{2}. $$
So,
$$\begin{aligned} \|t_{n}-u\|^{2} \leq& \|y_{n}-u\|^{2}- \|y_{n}-z_{n}\|^{2}-\|z_{n}-t_{n} \|^{2}+\|t_{n}-z_{n}\| ^{2} \\ &{}+ \lambda_{n}^{2}k^{2}\|y_{n}-z_{n} \|^{2} \\ =& \|y_{n}-u\|^{2}+ \bigl(\lambda_{n}^{2}k^{2}-1 \bigr)\|y_{n}-z_{n}\|^{2} \\ \leq& \|y_{n}-u\|^{2}. \end{aligned}$$
(3.6)
By the convexity of the norm and (3.6), we have
$$\begin{aligned} \Vert x_{n+1}-u \Vert ^{2} =& \bigl\Vert \alpha_{n}y_{n}+(1-\alpha_{n})S(t_{n})-u \bigr\Vert ^{2} \\ =& \bigl\Vert \alpha_{n}(y_{n}-u)+(1- \alpha_{n}) \bigl(S(t_{n})-u\bigr) \bigr\Vert ^{2} \\ =& \alpha_{n} \Vert y_{n}-u \Vert ^{2}+(1- \alpha_{n}) \bigl\Vert S(t_{n})-u \bigr\Vert ^{2} \\ &{}-\alpha_{n}(1-\alpha_{n}) \bigl\Vert y_{n}-u-\bigl(S(t_{n})-u\bigr) \bigr\Vert ^{2} \\ \leq& \alpha_{n} \Vert y_{n}-u \Vert ^{2}+(1- \alpha_{n}) \bigl\Vert S(t_{n})-S(u) \bigr\Vert ^{2} \\ \leq& \alpha_{n} \Vert y_{n}-u \Vert ^{2}+(1- \alpha_{n}) \Vert t_{n}-u \Vert ^{2} \\ \leq& \alpha_{n} \Vert y_{n}-u \Vert ^{2}+(1- \alpha_{n})\bigl[ \Vert y_{n}-u \Vert ^{2}+ \bigl(\lambda_{n}^{2}k^{2}-1\bigr) \Vert y_{n}-z_{n} \Vert ^{2}\bigr] \\ =& \Vert y_{n}-u \Vert ^{2}+(1-\alpha_{n}) \bigl(\lambda_{n}^{2}k^{2}-1\bigr) \Vert y_{n}-z_{n} \Vert ^{2} \\ \leq& \Vert y_{n}-u \Vert ^{2} \leq \Vert x_{n}-u \Vert ^{2}. \end{aligned}$$
(3.7)
Hence, there exists \(c\geq0\) such that
$$ \lim_{n \rightarrow \infty}\|x_{n}-u\|=c, $$
(3.8)
and then \(\{x_{n}\}\) is bounded. This implies that \(\{y_{n}\}\) and \(\{t_{n}\} \) are also bounded. From (3.5) and (3.7), we deduce that
$$ \beta_{n}(1-\beta_{n})\|x_{n}-V_{n}x_{n} \|^{2} \leq\|x_{n}-u\|^{2}-\|x_{n+1}-u \|^{2}. $$
Therefore, it follows from (3.8) that
$$x_{n}-V_{n}x_{n} \rightarrow 0, \quad \text{as } n\rightarrow \infty. $$
By (3.3), we get that
$$ x_{n}-y_{n} = \beta_{n}(x_{n}-V_{n}x_{n}) \rightarrow 0, \quad \text{as } n\rightarrow \infty. $$
(3.9)
Relation (3.7) implies
$$ (1-\alpha_{n}) \bigl(1-\lambda_{n}^{2}k^{2} \bigr)\|y_{n}-z_{n}\|^{2} \leq \|y_{n}-u \|^{2}-\| x_{n+1}-u\|^{2}, $$
and so
$$ y_{n}-z_{n} \rightarrow 0, \quad \text{as } n\rightarrow \infty. $$
(3.10)
Moreover, by the definition of \(z_{n}\), we have
$$\begin{aligned} \Vert z_{n}-t_{n} \Vert ^{2} =& \bigl\Vert P_{C}\bigl(y_{n}-\lambda_{n}f(y_{n}) \bigr)-P_{C}\bigl(y_{n}-\lambda _{n}f(z_{n}) \bigr) \bigr\Vert ^{2} \\ \leq& \bigl\Vert \bigl(y_{n}-\lambda_{n}f(y_{n}) \bigr)-\bigl(y_{n}-\lambda_{n}f(z_{n})\bigr) \bigr\Vert ^{2} \\ =& \bigl\Vert \lambda_{n}f(z_{n})-\lambda_{n}f(y_{n}) \bigr\Vert ^{2} \\ \leq& \lambda_{n}^{2}k^{2} \Vert z_{n}-y_{n} \Vert ^{2}. \end{aligned}$$
Hence
$$ z_{n} - t_{n} \rightarrow 0, \quad \text{as } n \rightarrow \infty. $$
(3.11)
Using the triangle inequality, we see that
$$ \|y_{n}-t_{n}\| \leq\|y_{n}-z_{n}\| + \|z_{n}-t_{n}\| $$
and
$$ \|x_{n}-z_{n}\| \leq\|x_{n}-y_{n}\|+ \|y_{n}-z_{n}\|. $$
This implies that
$$ y_{n}-t_{n} \rightarrow 0 \quad \text{and}\quad x_{n}-z_{n} \rightarrow 0,\quad \text{as } n \rightarrow \infty. $$
(3.12)
The definition of \(y_{n}\) implies
$$ (1-\mu_{n}) \bigl(x_{n}-P_{C}\bigl(x_{n}- \gamma_{n}A^{*}(I-T)Ax_{n}\bigr)\bigr)=x_{n}-y_{n}. $$
Thus
$$ x_{n}-P_{C}\bigl(x_{n}-\gamma_{n}A^{*}(I-T)Ax_{n} \bigr) \rightarrow 0, \quad \text{as } n \rightarrow \infty . $$
(3.13)
Let \(z \in\omega_{w}(x_{n})\). Then there exists a subsequence \(\{x_{n_{i}}\} \) of \(\{x_{n}\}\) which converges weakly to z. Since \(\{\gamma_{n_{i}}\} \subset[a,b]\), without loss of generality, we may assume that \(\gamma_{n_{i}} \rightarrow \hat {\gamma} \in(0,\frac{1}{\|A\|^{2}})\). We obtain that \(\{ A^{*}(I-T)Ax_{n_{i}}\}\) is bounded because \(A^{*}(I-T)A\) is \(\frac{1}{2\|A\| ^{2}}\)-inverse strongly monotone. By the firm nonexpansiveness of \(P_{C}\), we see that
$$\begin{aligned}& \bigl\Vert P_{C}\bigl(I-\gamma_{n_{i}}A^{*}(I-T)A \bigr)x_{n_{i}}-P_{C}\bigl(I-\hat{\gamma }A^{*}(I-T)A \bigr)x_{n_{i}} \bigr\Vert \\& \quad \leq |\gamma_{n_{i}}-\hat{\gamma}| \bigl\Vert A^{*}(I-T)Ax_{n_{i}} \bigr\Vert , \end{aligned}$$
and so
$$ P_{C}\bigl(I-\gamma_{n_{i}}A^{*}(I-T)A\bigr)x_{n_{i}}-P_{C} \bigl(I-\hat{\gamma }A^{*}(I-T)A\bigr)x_{n_{i}} \rightarrow 0,\quad i \rightarrow \infty. $$
(3.14)
From (3.13), (3.14) and
$$\begin{aligned}& \bigl\Vert x_{n_{i}}-P_{C}\bigl(I-\hat{\gamma}A^{*}(I-T)A \bigr)x_{n_{i}} \bigr\Vert \\& \quad \leq \bigl\Vert x_{n_{i}}- P_{C}\bigl(I- \gamma_{n_{i}}A^{*}(I-T)A\bigr)x_{n_{i}} \bigr\Vert \\& \qquad {}+ \bigl\Vert P_{C}\bigl(I-\gamma_{n_{i}}A^{*}(I-T)A \bigr)x_{n_{i}}-P_{C}\bigl(I-\hat {\gamma}A^{*}(I-T)A \bigr)x_{n_{i}} \bigr\Vert , \end{aligned}$$
we have
$$ x_{n_{i}}-P_{C}\bigl(I-\hat{\gamma}A^{*}(I-T)A \bigr)x_{n_{i}} \rightarrow 0, \quad \text{as } i \rightarrow \infty. $$
(3.15)
By the demiclosedness principle [33], we have
$$z \in\operatorname{Fix}\bigl(P_{C}\bigl(I-\hat{\gamma}A^{*}(I-T)A\bigr) \bigr). $$
Using Corollary 2.9 [32], this yields
$$ z \in C \cap A^{-1}\operatorname{Fix}(T). $$
(3.16)
Next, we claim that \(z \in\operatorname{VI}(C,f)\). From (3.9), (3.10) and (3.11), we know that \(y_{n_{i}} \rightharpoonup z\), \(z_{n_{i}} \rightharpoonup z\) and \(t_{n_{i}} \rightharpoonup z\). Define the set-valued mapping \(B: H_{1} \rightrightarrows H_{1}\) by
$$ Bv = \textstyle\begin{cases} f(v) + N_{C}v, &\text{if } v\in C; \\ \emptyset, &\text{if } v\notin C. \end{cases} $$
By [27], B is maximal monotone and \(0 \in Bv\) if and only if \(v \in\operatorname{VI}(C,f)\). If \((v,w) \in G(B)\), then \(w \in Bv=f(v)+N_{C}v\), and so \(w-f(v) \in N_{C}v\). Thus, for any \(p \in C\), we get
$$ \bigl\langle v-p, w-f(v)\bigr\rangle \geq0. $$
(3.17)
Since \(v \in C\), it follows from the definition of \(z_{n}\) and Lemma 2.1 that
$$ \langle y_{n} - \lambda_{n}fy_{n}-z_{n},z_{n}-v \rangle\geq0. $$
Consequently,
$$ \biggl\langle \frac{z_{n}-y_{n}}{\lambda_{n}}+f(y_{n}), v-z_{n}\biggr\rangle \geq0. $$
By using (3.17) with \(p=z_{n_{i}}\), we obtain
$$ \bigl\langle w -f(v), v -z_{n_{i}}\bigr\rangle \geq0. $$
Thus
$$\begin{aligned} \langle w,v-z_{n_{i}}\rangle \geq& \bigl\langle f(v),v-z_{n_{i}} \bigr\rangle \\ \geq& \bigl\langle f(v),v-z_{n_{i}}\bigr\rangle - \biggl\langle \frac {z_{n_{i}}-y_{n_{i}}}{\lambda_{n_{i}}}+ f(y_{n_{i}}),v-z_{n_{i}}\biggr\rangle \\ =& \bigl\langle f(v)-f(z_{n_{i}}),v-z_{n_{i}}\bigr\rangle + \bigl\langle f(z_{n_{i}}) - f(y_{n_{i}}) ,v-z_{n_{i}}\bigr\rangle \\ &{}-\biggl\langle \frac{z_{n_{i}}-y_{n_{i}}}{\lambda_{n_{i}}},v-z_{n_{i}}\biggr\rangle \\ \geq& \bigl\langle f(z_{n_{i}}) - f(y_{n_{i}}) ,v-z_{n_{i}}\bigr\rangle -\biggl\langle \frac{z_{n_{i}}-y_{n_{i}}}{\lambda_{n_{i}}},v-z_{n_{i}} \biggr\rangle . \end{aligned}$$
By taking \(i\rightarrow \infty\) in the above inequality, we deduce
$$ \langle w,v-z\rangle\geq0. $$
By the maximal monotonicity of B, we get \(0 \in Bz\) and so \(z \in \operatorname{VI}(C,f)\). Now, we will show that \(z \in\operatorname{Fix}(S)\). Since S is nonexpansive, it follows from (3.4) and (3.6) that
$$ \bigl\Vert S(t_{n})-u \bigr\Vert = \bigl\Vert S(t_{n})-S(u) \bigr\Vert \leq \Vert t_{n}-u \Vert \leq \Vert y_{n}-u \Vert \leq \Vert x_{n}-u \Vert , $$
and by taking limit superior in the above inequalities and using (3.8), we obtain
$$ \limsup_{n \rightarrow \infty} \bigl\| S(t_{n})-u\bigr\| \leq c \quad \text{and} \quad \limsup_{n \rightarrow \infty} \|y_{n} -u \| \leq c. $$
Further,
$$\begin{aligned} \lim_{n \rightarrow \infty} \bigl\Vert \alpha_{n}(y_{n}-u)+(1- \alpha _{n}) \bigl(S(t_{n})-u\bigr) \bigr\Vert =& \lim _{n \rightarrow \infty} \bigl\Vert \alpha _{n}y_{n}+(1- \alpha_{n})S(t_{n})-u \bigr\Vert \\ =& \lim_{n \rightarrow \infty} \Vert x_{n+1}-u \Vert \\ =& c, \end{aligned}$$
and so Lemma 2.5 implies
$$ \lim_{n \rightarrow \infty} \bigl\| S(t_{n})-y_{n}\bigr\| =0. $$
(3.18)
From the fact that
$$\begin{aligned} \bigl\Vert S(y_{n})-y_{n} \bigr\Vert =& \bigl\Vert S(y_{n})-S(t_{n})+S(t_{n})-y_{n} \bigr\Vert \\ \leq& \bigl\Vert S(y_{n})-S(t_{n}) \bigr\Vert + \bigl\Vert S(t_{n})-y_{n} \bigr\Vert \\ \leq& \Vert y_{n}-t_{n} \Vert + \bigl\Vert S(t_{n})-y_{n} \bigr\Vert , \end{aligned}$$
(3.12) and (3.18), we have
$$ \lim_{n \rightarrow \infty} \bigl\Vert S(y_{n})-y_{n} \bigr\Vert =0. $$
This implies that
$$ \lim_{i \rightarrow \infty} \bigl\Vert (I-S) (y_{n_{i}}) \bigr\Vert = \lim_{i \rightarrow \infty} \bigl\Vert y_{n_{i}}-S(y_{n_{i}}) \bigr\Vert =0. $$
Now, by the demiclosedness principle [33], we have \(z \in\operatorname{Fix}(S)\). Consequently, \(\omega_{w}(x_{n}) \subset\varGamma\). By Lemma 2.6, the sequence \(\{x_{n}\}\) is weakly convergent to a point z in Γ and Lemma 3.2 [28] assures \(z=\lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\). □
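Theorem 3.1 can be illustrated numerically. All concrete data below (the sets, mappings, and parameters) are our own choices satisfying the standing assumptions; for them \(\varGamma=\{0\}\), so the iterates should approach the origin:

```python
import numpy as np

# A small numerical instance of algorithm (3.2); all concrete choices here
# are ours, for illustration only: H1 = H2 = R^2, A = I (so ||A|| = 1),
# C = [-1, 1]^2, f(x) = (x2, -x1) (monotone and 1-Lipschitz but not strongly
# monotone), S = projection onto the line x1 = x2 (nonexpansive), and
# T = P_Q with Q the nonnegative orthant.  For these data Gamma = {0}.
A = np.eye(2)
P_C = lambda x: np.clip(x, -1.0, 1.0)
f = lambda x: np.array([x[1], -x[0]])
S = lambda x: np.full(2, x.mean())     # metric projection onto {x : x1 = x2}
T = lambda y: np.maximum(y, 0.0)       # P_Q for Q = nonnegative orthant

mu, alpha = 0.5, 0.5                   # mu_n, alpha_n in (0, 1)
gamma = 0.5                            # gamma_n in (0, 1/||A||^2) = (0, 1)
lam = 0.5                              # lambda_n in (0, 1/k) with k = 1

x = np.array([1.0, -1.0])
for _ in range(2000):
    y = mu * x + (1 - mu) * P_C(x - gamma * (A.T @ (A @ x - T(A @ x))))
    z = P_C(y - lam * f(y))
    x = alpha * y + (1 - alpha) * S(P_C(y - lam * f(z)))

print(np.linalg.norm(x))   # tends to 0, the unique point of Gamma
```

In finite dimensions weak convergence is ordinary convergence, so the norm of the iterates is a direct diagnostic here.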

Remark 3.2

We can obtain the following statements:
  1. (i)

    If \(f=0\), \(T=P_{Q}\), and \(S=I\), then problem (3.1) coincides with the SFP and algorithm (3.2) reduces to algorithm (1.8) for solving the SFP.

     
  2. (ii)

    If \(T=I\), then problem (3.1) coincides with the VIP and FPP and algorithm (3.2) reduces to algorithm (1.5) for solving the VIP and FPP.

     
  3. (iii)

    If \(S=I\), then problem (3.1) coincides with problem 3.1 in [32] and if \(\alpha_{n}, \mu_{n} =0\), we obtain that algorithm (3.2) reduces to algorithm 3.2 in [32].

     
The following result provides suitable conditions in order to guarantee the existence of a common solution of the split variational inequality problems and fixed point problems, that is, finding a point \(x^{*}\) such that
$$ x^{*}\in\operatorname{VI}(C,f) \cap\operatorname{Fix}(S)\quad \text{and}\quad Ax^{*} \in\operatorname{VI}(Q,g). $$
(3.19)

Theorem 3.3

Set \(\varGamma= \{z \in\operatorname{VI}(C,f) \cap\operatorname{Fix}(S) : Az \in\operatorname{VI}(Q,g)\}\) and assume that \(\varGamma \neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{z_{n}\}\) be generated by \(x_{1} = x \in C\) and
$$\begin{aligned}& y_{n} = \mu_{n} x_{n}+(1-\mu_{n})P_{C} \bigl(x_{n} - \gamma_{n} A^{*}\bigl(I - P_{Q}(I-\theta g)\bigr)Ax_{n}\bigr), \\& z_{n} = P_{C}\bigl(y_{n} - \lambda_{n}f(y_{n}) \bigr), \\& x_{n+1} = \alpha_{n} y_{n} + (1- \alpha_{n}) SP_{C}\bigl(y_{n} - \lambda _{n}f(z_{n})\bigr), \end{aligned}$$
(3.20)
for each \(n \in {\mathbb {N}}\), where \(\theta\in(0,2\delta)\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z \in\varGamma\), where \(z = \lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\).

Proof

It is clear from the δ-inverse strong monotonicity of g that g is \(\frac{1}{\delta}\)-Lipschitz continuous, and so, for \(\theta\in (0,2\delta)\), the mapping \(I-\theta g\) is nonexpansive. Since \(P_{Q}\) is firmly nonexpansive, \(P_{Q}(I-\theta g)\) is nonexpansive. By taking \(T=P_{Q}(I-\theta g)\) in Theorem 3.1, we obtain that \(\{ x_{n}\}\) converges weakly to a point \(z\in\operatorname{VI}(C,f) \cap\operatorname {Fix}(S)\) with \(Az \in\operatorname{Fix}(P_{Q}(I-\theta g))\). It follows from \(Az =P_{Q}(I-\theta g)Az\) and Lemma 2.1 that \(Az \in\operatorname {VI}(Q,g)\). This completes the proof. □

Remark 3.4

We can obtain the following statements:
  1. (i)

    If \(f=0\), \(g=0\), and \(S=I\), then problem (3.19) coincides with the SFP and algorithm (3.20) reduces to algorithm (1.8) for solving the SFP.

     
  2. (ii)

    If \(g=0\) and \(Q=H_{2}\), then problem (3.19) coincides with the VIP and FPP and algorithm (3.20) reduces to algorithm (1.5) for solving the VIP and FPP.

     
  3. (iii)

    If \(S=I\), then problem (3.19) coincides with problem 3.1 in [32] and if \(\alpha_{n}, \mu_{n} =0\), then algorithm (3.20) reduces to algorithm (1.10).

     

4 Applications

In this section, by using the main results, we give some applications to the weak convergence of the produced algorithms for the equilibrium problem, zero point problem and convex minimization problem.

The equilibrium problem was formulated by Blum and Oettli [4] in 1994 for finding a point \(x^{*}\) such that
$$ F\bigl(x^{*},y\bigr) \geq0 \quad \text{for all } y \in C, $$
(4.1)
where \(F:C \times C \rightarrow {\mathbb {R}}\) is a bifunction. The solution set of equilibrium problem (4.1) is denoted by \(\operatorname{EP}(C,F)\).
It is known from [4] that if F is a bifunction such that

(A1) \(F(x,x)=0\) for all \(x \in C\);

(A2) F is monotone, that is, \(F(x,y) + F(y,x) \leq0\) for all \(x, y \in C\);

(A3) for each \(x, y, z \in C\), \(\limsup_{t\downarrow0} F(tz + (1 - t)x, y) \leq F(x, y)\);

(A4) for each fixed \(x \in C\), \(y \mapsto F(x,y)\) is lower semicontinuous and convex,
then there exists \(z \in C\) such that
$$ F(z,y) + \frac{1}{r} \langle y - z, z - x\rangle\geq0,\quad \forall y \in C, $$
where r is a positive real number and \(x \in H\).
For \(r > 0\) and \(x\in H\), the resolvent \(T_{r} : H \rightarrow C\) of a bifunction F satisfying conditions (A1)–(A4) is defined by
$$ T_{r}x=\biggl\{ z \in C : F(z,y) +\frac{1}{r} \langle y-z, z-x \rangle\geq0, \text{for all } y \in C\biggr\} \quad \text{for all } x \in H, $$
and has the following properties:

(i) \(T_{r}\) is single-valued and firmly nonexpansive;

(ii) \(\operatorname{Fix}(T_{r}) = \operatorname{EP}(C,F)\);

(iii) \(\operatorname{EP}(C,F)\) is closed and convex.
For more details, see [13].
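To make the resolvent concrete, consider a one-dimensional toy instance of our own choosing (not from the paper): \(C = H = {\mathbb {R}}\) and the monotone bifunction \(F(z,y) = z(y-z)\), for which the defining inequality is solved in closed form by \(T_{r}x = x/(1+r)\). The sketch below verifies the defining inequality numerically at sampled points y:

```python
import numpy as np

# Hypothetical 1D example: C = R and the monotone bifunction F(z, y) = z*(y - z),
# whose resolvent works out in closed form as T_r x = x / (1 + r).
def T_r(x, r):
    return x / (1.0 + r)

r, x = 2.0, 5.0
z = T_r(x, r)
# z must satisfy F(z, y) + (1/r)*(y - z)*(z - x) >= 0 for every y;
# here z - x = -r*z, so the left-hand side is identically zero.
for y in np.linspace(-10, 10, 41):
    assert z * (y - z) + (y - z) * (z - x) / r >= -1e-12
print(z)  # 5/3
```

Property (i) is visible here as well: \(T_{r}\) is an affine contraction with factor \(1/(1+r)\), hence firmly nonexpansive.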

The following result for equilibrium problems is obtained by applying Theorem 3.1.

Theorem 4.1

Let \(F : C \times C \rightarrow {\mathbb {R}}\) be a bifunction satisfying conditions (A1)–(A4). Set \(\varGamma= \{z \in\operatorname{VI}(C, f)\cap \operatorname {Fix}(S) : Az \in\operatorname{EP}(C,F)\}\) and suppose that \(\varGamma\neq \emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{z_{n}\}\) be generated by \(x_{1} = x \in C\) and
$$ \textstyle\begin{cases} y_{n} = \mu_{n} x_{n}+(1-\mu_{n})P_{C}(x_{n} - \gamma_{n} A^{*}(I - T_{r})Ax_{n}), \\ z_{n} = P_{C}(y_{n} - \lambda_{n}f(y_{n})), \\ x_{n+1} = \alpha_{n} y_{n} + (1-\alpha_{n}) SP_{C}(y_{n} - \lambda_{n}f(z_{n})), \end{cases} $$
(4.2)
for each \(n \in {\mathbb {N}}\), where \(T_{r}\) is the resolvent of F for \(r > 0\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z \in\varGamma \), where \(z = \lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\).

Proof

Since \(T_{r}\) is firmly nonexpansive, and hence nonexpansive, the proof follows from Theorem 3.1 by taking \(T=T_{r}\). □

The following results apply Theorem 3.1 to the zero point problem.

Theorem 4.2

Let \(B : H_{2} \rightrightarrows{H_{2}}\) be a maximal monotone mapping with \(D(B) \neq\emptyset\). Set \(\varGamma= \{z \in\operatorname{VI}(C, f)\cap \operatorname{Fix}(S) : Az \in B^{-1}0\}\) and assume that \(\varGamma\neq \emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{z_{n}\}\) be generated by \(x_{1} = x \in C\) and
$$ \textstyle\begin{cases} y_{n} = \mu_{n} x_{n}+(1-\mu_{n})P_{C}(x_{n} - \gamma_{n} A^{*}(I - J_{r})Ax_{n}), \\ z_{n} = P_{C}(y_{n} - \lambda_{n}f(y_{n})), \\ x_{n+1} = \alpha_{n} y_{n} + (1-\alpha_{n}) SP_{C}(y_{n} - \lambda_{n}f(z_{n})), \end{cases} $$
(4.3)
for each \(n \in {\mathbb {N}}\), where \(J_{r}\) is the resolvent of B for \(r > 0\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z \in\varGamma \), where \(z = \lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\).

Proof

Since \(J_{r}\) is firmly nonexpansive and \(\operatorname{Fix}(J_{r})=B^{-1}0\), the proof follows from Theorem 3.1 by taking \(T=J_{r}\). □
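As a concrete illustration (our own example, not from the paper): for the maximal monotone operator \(B = \partial\vert\cdot\vert\) on \({\mathbb {R}}\), the resolvent \(J_{r}=(I+rB)^{-1}\) is the well-known soft-thresholding map, and \(\operatorname{Fix}(J_{r})=B^{-1}0=\{0\}\). A minimal NumPy sketch:

```python
import numpy as np

def soft_threshold(x, r):
    # Resolvent J_r = (I + r*B)^(-1) for B = subdifferential of the absolute value:
    # z = J_r(x) solves x in z + r*d|z| componentwise.
    return np.sign(x) * np.maximum(np.abs(x) - r, 0.0)

x = np.array([-3.0, -0.5, 0.0, 0.5, 3.0])
z = soft_threshold(x, r=1.0)
print(z)                                       # components with |x| <= r collapse to 0
print(soft_threshold(np.array([0.0]), r=1.0))  # 0 is the unique fixed point, i.e. B^{-1}0
```

Plugging this \(J_{r}\) into algorithm (4.3) would then drive \(Ax_{n}\) toward \(B^{-1}0=\{0\}\).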

Theorem 4.3

Let \(B : H_{2}\rightrightarrows{H_{2}}\) be a maximal monotone mapping with \(D(B) \neq\emptyset\), and let \(F : H_{2} \rightarrow H_{2}\) be a δ-inverse strongly monotone mapping. Set \(\varGamma= \{z \in\operatorname{VI}(C,f) \cap\operatorname{Fix}(S) : Az \in(B + F)^{-1}0\}\) and assume that \(\varGamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{z_{n}\}\) be generated by \(x_{1} = x \in C\) and
$$ \textstyle\begin{cases} y_{n} = \mu_{n} x_{n}+(1-\mu_{n})P_{C}(x_{n} - \gamma_{n} A^{*}(I - J_{r}(I-rF))Ax_{n}), \\ z_{n} = P_{C}(y_{n} - \lambda_{n}f(y_{n})), \\ x_{n+1} = \alpha_{n} y_{n} + (1-\alpha_{n}) SP_{C}(y_{n} - \lambda_{n}f(z_{n})), \end{cases} $$
(4.4)
for each \(n \in {\mathbb {N}}\), where \(J_{r}\) is the resolvent of B for \(r \in (0,2\delta)\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z \in\varGamma\), where \(z = \lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\).

Proof

Since F is δ-inverse strongly monotone and \(r \in (0,2\delta)\), the mapping \(I-rF\) is nonexpansive. By the nonexpansiveness of \(J_{r}\), the composition \(J_{r}(I-rF)\) is also nonexpansive. Moreover, \(z \in(B+F)^{-1}0\) if and only if \(z=J_{r}(I-rF)z\). Thus the proof follows from Theorem 3.1 by taking \(T=J_{r}(I-rF)\). □

Let ϕ be a real-valued convex function from C to \({\mathbb {R}}\). The typical form of the constrained convex minimization problem is to find a point \(x^{*} \in C\) satisfying
$$ \phi\bigl(x^{*}\bigr)=\min_{x \in C} \phi(x). $$
(4.5)
Denote the solution set of constrained convex minimization problem (4.5) by \(\arg\min_{x \in C} \phi(x)\).
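For differentiable convex ϕ, problem (4.5) admits the fixed-point characterization \(x^{*} \in \arg\min_{x \in C} \phi(x)\) if and only if \(x^{*}=P_{C}(x^{*}-\theta\nabla\phi(x^{*}))\), which is what the results below exploit. Here is a minimal finite-dimensional sketch of the resulting projected-gradient iteration; the quadratic objective, box constraint, and step size are our own illustrative choices:

```python
import numpy as np

# Hypothetical data: phi(x) = 0.5*||x - a||^2 over the box C = [0,1]^2,
# so grad phi(x) = x - a is 1-Lipschitz and 1-inverse strongly monotone.
a = np.array([2.0, -1.0])
grad_phi = lambda x: x - a
P_C = lambda x: np.clip(x, 0.0, 1.0)    # metric projection onto the box

theta = 1.0                             # step size in (0, 2*delta) with delta = 1
x = np.zeros(2)
for _ in range(100):
    x = P_C(x - theta * grad_phi(x))    # fixed-point iteration of P_C(I - theta*grad_phi)

print(x)  # the minimizer over C, here the box projection of a: [1. 0.]
```

The iterates stop changing exactly when \(x=P_{C}(x-\theta\nabla\phi(x))\), i.e. at a solution of (4.5).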

By applying Theorem 3.3, we get the following result.

Theorem 4.4

Let \(\phi: H_{2} \rightarrow {\mathbb {R}}\) be a differentiable convex function and suppose that \(\nabla\phi\) is a δ-inverse strongly monotone mapping. Set \(\varGamma= \{z \in\operatorname{VI}(C,f) \cap\operatorname{Fix}(S) : Az \in\arg \min_{y\in Q} \phi(y)\}\) and assume that \(\varGamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{z_{n}\}\) be generated by \(x_{1} = x \in C\) and
$$ \textstyle\begin{cases} y_{n} = \mu_{n} x_{n}+(1-\mu_{n})P_{C}(x_{n} - \gamma_{n} A^{*}(I - P_{Q}(I-\theta \nabla\phi))Ax_{n}), \\ z_{n} = P_{C}(y_{n} - \lambda_{n}f(y_{n})), \\ x_{n+1} = \alpha_{n} y_{n} + (1-\alpha_{n}) SP_{C}(y_{n} - \lambda_{n}f(z_{n})), \end{cases} $$
(4.6)
for each \(n \in {\mathbb {N}}\), where \(\theta\in(0,2\delta)\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z\in\varGamma\), where \(z = \lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\).

Proof

Since ϕ is convex, for each \(x, z \in C\), we have
$$ \phi\bigl(x+\lambda(z-x)\bigr) \leq(1-\lambda)\phi(x)+\lambda\phi(z)\quad \text{for all } \lambda\in(0,1). $$
It follows that \(\langle\nabla\phi(x),x-z\rangle\geq\phi(x) -\phi (z) \geq\langle\nabla\phi(z),x-z\rangle\). This implies that ∇ϕ is monotone. The proof then follows from Theorem 3.3 and Lemma 4.6 of [32] by taking \(g= \nabla\phi\). □

By applying Theorem 3.3, we obtain the following result for solving split minimization problems and fixed point problems.

Theorem 4.5

Let \(\phi_{1}: H_{1} \rightarrow {\mathbb {R}}\) and \(\phi_{2} : H_{2} \rightarrow {\mathbb {R}}\) be differentiable convex functions. Suppose that \(\nabla\phi_{1}\) is a k-Lipschitz continuous mapping and \(\nabla\phi_{2}\) is δ-inverse strongly monotone. Set \(\varGamma= \{z \in\operatorname{arg\,min}_{x\in C} \phi_{1}(x) \cap\operatorname {Fix}(S) : Az \in\operatorname{arg\,min}_{y\in Q} \phi_{2}(y)\}\) and assume that \(\varGamma\neq\emptyset\). Let the sequences \(\{x_{n}\}\), \(\{y_{n}\}\), and \(\{z_{n}\}\) be generated by \(x_{1} = x \in C\) and
$$ \textstyle\begin{cases} y_{n} = \mu_{n} x_{n}+(1-\mu_{n})P_{C}(x_{n} - \gamma_{n} A^{*}(I - P_{Q}(I-\theta \nabla\phi_{2}))Ax_{n}), \\ z_{n} = P_{C}(y_{n} - \lambda_{n}\nabla\phi_{1}(y_{n})), \\ x_{n+1} = \alpha_{n} y_{n} + (1-\alpha_{n}) SP_{C}(y_{n} - \lambda_{n}\nabla \phi_{1}(z_{n})), \end{cases} $$
(4.7)
for each \(n \in {\mathbb {N}}\), where \(\theta\in(0,2\delta)\). Then the sequence \(\{x_{n}\}\) converges weakly to a point \(z \in\varGamma\), where \(z = \lim_{n \rightarrow \infty} P_{\varGamma}x_{n}\).

Proof

The convexity of \(\phi_{1}\) implies that \(\nabla\phi_{1}\) is monotone. The result then follows from Theorem 3.3 and Lemma 4.6 of [32] by taking \(f = \nabla\phi_{1}\) and \(g = \nabla\phi_{2}\). □
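To see how iteration (4.7) behaves in practice, here is a finite-dimensional sketch under assumptions of our own making: \(H_{1}=H_{2}={\mathbb {R}}^{2}\), \(A=I\), \(S=I\), quadratic objectives, box constraints, and constant parameters (in Theorem 4.5 the parameters \(\alpha_{n}\), \(\mu_{n}\), \(\gamma_{n}\), \(\lambda_{n}\) may vary with n, subject to the conditions of Theorem 3.3):

```python
import numpy as np

# Toy instance of algorithm (4.7); all problem data below (A, b1, b2, the boxes
# C and Q, and the constant parameters) are our own illustrative choices.
A = np.eye(2)                                    # bounded linear operator H1 -> H2
b1, b2 = np.array([2.0, -1.0]), np.array([3.0, -2.0])
grad_phi1 = lambda x: x - b1                     # nabla phi_1, 1-Lipschitz (k = 1)
grad_phi2 = lambda y: y - b2                     # nabla phi_2, 1-inverse strongly monotone
P_C = lambda x: np.clip(x, 0.0, 1.0)             # projection onto C = [0,1]^2
P_Q = lambda y: np.clip(y, 0.0, 1.0)             # projection onto Q = [0,1]^2
S = lambda x: x                                  # nonexpansive mapping (identity here)

alpha, mu, gamma, lam, theta = 0.5, 0.5, 0.5, 0.5, 1.0
T = lambda u: P_Q(u - theta * grad_phi2(u))      # T = P_Q(I - theta * grad_phi2)

x = np.zeros(2)
for _ in range(200):
    y = mu * x + (1 - mu) * P_C(x - gamma * A.T @ (A @ x - T(A @ x)))
    z = P_C(y - lam * grad_phi1(y))
    x = alpha * y + (1 - alpha) * S(P_C(y - lam * grad_phi1(z)))

print(x)  # approaches [1, 0], the common minimizer of both box-constrained quadratics
```

With this data \(\varGamma=\{(1,0)\}\), and the iterates approach it geometrically; in infinite dimensions the theorem of course only guarantees weak convergence.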

Acknowledgements

The first author is thankful to the Science Achievement Scholarship of Thailand. We would like express our deep thanks to the Department of Mathematics, Faculty of Science, Naresuan University for the support.

Authors’ contributions

All authors contributed equally to the work. All authors read and approved the final manuscript.

Funding

The research was supported by the Science Achievement Scholarship of Thailand and Naresuan University.

Competing interests

The authors declare that they have no competing interests.

References

  1. Alghamdi, M.A., Shahzad, N., Zegeye, H.: On solutions of variational inequality problems via iterative methods. Abstr. Appl. Anal. 2014, Article ID 424875 (2014)
  2. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York (2011)
  3. Billups, S.C., Murty, K.G.: Complementarity problems. J. Comput. Appl. Math. 124, 303–318 (2000)
  4. Blum, E., Oettli, W.: From optimization and variational inequalities to equilibrium problems. Math. Stud. 63, 123–145 (1994)
  5. Byrne, C.: Iterative oblique projection onto convex sets and the split feasibility problem. Inverse Probl. 18, 441–453 (2002)
  6. Byrne, C.: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Probl. 20, 103–120 (2004)
  7. Byrne, C., Censor, Y., Gibali, A., Reich, S.: Weak and strong convergence of algorithms for the split common null point problem. J. Nonlinear Convex Anal. 13, 759–775 (2012)
  8. Ceng, L.C., Ansari, Q.H., Yao, J.C.: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286–5302 (2011)
  9. Censor, Y., Bortfeld, T., Martin, B., Trofimov, A.: A unified approach for inversion problems in intensity modulated radiation therapy. Phys. Med. Biol. 51, 2353–2365 (2006)
  10. Censor, Y., Elfving, T.: A multiprojection algorithm using Bregman projections in a product space. Numer. Algorithms 8, 221–239 (1994)
  11. Censor, Y., Gibali, A., Reich, S.: Algorithms for the split variational inequality problem. Numer. Algorithms 59, 301–323 (2012)
  12. Cho, S.Y., Qin, X., Yao, J.C., Yao, Y.: Viscosity approximation splitting methods for monotone and nonexpansive operators in Hilbert spaces. J. Nonlinear Convex Anal. 19, 251–264 (2018)
  13. Combettes, P.L., Hirstoaga, S.A.: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 6, 117–136 (2005)
  14. Eckstein, J., Bertsekas, D.P.: On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 55, 293–318 (1992)
  15. Fang, N.N., Gong, Y.P.: Viscosity iterative methods for split variational inclusion problems and fixed point problems of a nonexpansive mapping. Commun. Optim. Theory 2016, Article ID 11 (2016)
  16. Gibali, A.: Two simple relaxed perturbed extragradient methods for solving variational inequalities in Euclidean spaces. J. Nonlinear Var. Anal. 2, 49–61 (2018)
  17. Hartman, P., Stampacchia, G.: On some non-linear elliptic differential-functional equations. Acta Math. 115, 271–310 (1966)
  18. Kim, J.K., Salahuddin: A system of nonconvex variational inequalities in Banach spaces. Commun. Optim. Theory 2016, Article ID 20 (2016)
  19. Korpelevich, G.M.: An extragradient method for finding saddle points and for other problems. Èkon. Mat. Metody 12, 747–756 (1976)
  20. Mancino, O.G., Stampacchia, G.: Convex programming and variational inequalities. J. Optim. Theory Appl. 9(1), 3–23 (1972)
  21. Qin, X., Cho, S.Y., Wang, L.: Strong convergence of an iterative algorithm involving nonlinear mappings of nonexpansive and accretive type. Optimization 67, 1377–1388 (2018). https://doi.org/10.1080/02331934.2018.1491973
  22. Qin, X., Yao, J.C.: Projection splitting algorithms for nonself operators. J. Nonlinear Convex Anal. 18, 925–935 (2017)
  23. Schu, J.: Weak and strong convergence to fixed points of asymptotically nonexpansive mappings. Bull. Aust. Math. Soc. 43, 153–159 (1991)
  24. Stampacchia, G.: Formes bilineaires coercivites sur les ensembles convexes. C. R. Acad. Sci. Paris 258, 4413–4416 (1964)
  25. Suwannaprapa, M., Petrot, N., Suantai, S.: Weak convergence theorems for split feasibility problems on zeros of the sum of monotone operators and fixed point sets in Hilbert spaces. Fixed Point Theory Appl. 2017, 6 (2017)
  26. Takahashi, W.: Introduction to Nonlinear and Convex Analysis. Yokohama Publishers, Yokohama (2009)
  27. Takahashi, W., Nadezhkina, N.: Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 128, 191–201 (2006)
  28. Takahashi, W., Toyoda, M.: Weak convergence theorem for nonexpansive mappings and monotone mappings. J. Optim. Theory Appl. 118, 417–428 (2003)
  29. Takahashi, W., Wen, C.F., Yao, J.C.: An implicit algorithm for the split common fixed point problem in Hilbert spaces and applications. Appl. Anal. Optim. 1, 423–439 (2017)
  30. Takahashi, W., Xu, H.K., Yao, J.C.: Iterative methods for generalized split feasibility problems in Hilbert spaces. Set-Valued Var. Anal. 23(2), 205–221 (2015)
  31. Tian, M., Jiang, B.N.: Weak convergence theorem for variational inequality problems with monotone mapping in Hilbert spaces. J. Inequal. Appl. 2016, 286 (2016)
  32. Tian, M., Jiang, B.N.: Weak convergence theorem for a class of split variational inequality problems and applications in a Hilbert space. J. Inequal. Appl. 2017, 123 (2017)
  33. Xu, H.K.: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 298, 279–291 (2004)
  34. Xu, H.K.: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Probl. 26, 105018 (2010)
  35. Xu, H.K.: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 150, 360–378 (2011)
  36. Yao, Y., Marino, G., Liou, Y.C.: A hybrid method for monotone variational inequalities involving pseudocontractions. Fixed Point Theory Appl. 2011, 180534 (2011)
  37. Yao, Y.H., Agarwal, R.P., Postolache, M., Liou, Y.C.: Algorithms with strong convergence for the split common solution of the feasibility problem and fixed point problem. Fixed Point Theory Appl. 2014, 183 (2014)
  38. Yao, Y.H., Liou, Y.C., Yao, J.C.: Split common fixed point problem for two quasi-pseudocontractive operators and its algorithm construction. Fixed Point Theory Appl. 2015, 127 (2015)
  39. Yao, Y.H., Liou, Y.C., Yao, J.C.: Iterative algorithms for the split variational inequality and fixed point problems under nonlinear transformations. J. Nonlinear Sci. Appl. 10, 843–854 (2017)
  40. Yuan, H.: A splitting algorithm in a uniformly convex and 2-uniformly smooth Banach space. J. Nonlinear Funct. Anal. 2018, Article ID 26 (2018)
  41. Zegeye, H., Shahzad, N., Yao, Y.H.: Minimum-norm solution of variational inequality and fixed point problem in Banach spaces. Optimization 64, 453–471 (2015)
  42. Zeng, L.C., Yao, J.C.: Strong convergence theorems for fixed point problems and variational inequality problems. Taiwan. J. Math. 10(5), 1293–1303 (2006)
  43. Zhou, H., Zhou, Y., Feng, G.: Iterative methods for solving a class of monotone variational inequality problems with applications. J. Inequal. Appl. 2015, 68 (2015)

Copyright information

© The Author(s) 2018

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors and Affiliations

Panisa Lohawech¹, Anchalee Kaewcharoen¹ (corresponding author), Ali Farajzadeh²

  1. Department of Mathematics, Faculty of Science, Naresuan University, Phitsanulok, Thailand
  2. Department of Mathematics, Faculty of Science, Razi University, Kermanshah, Iran