Abstract
We propose in this article a new inertial hybrid gradient method with self-adaptive step size for approximating a common solution of variational inequality and fixed point problems for an infinite family of relatively nonexpansive multivalued mappings in Banach spaces. Unlike in many existing hybrid gradient methods, the projection onto the closed convex set is here replaced with a projection onto a half-space, which can easily be implemented. We incorporate into the proposed algorithm an inertial term and a self-adaptive step size, which help to accelerate the rate of convergence of the iterative scheme. Moreover, we prove a strong convergence theorem without knowledge of the Lipschitz constant of the monotone operator, and we apply our result to find a common solution of constrained convex minimization and fixed point problems in Banach spaces. Finally, we present a numerical example to demonstrate the efficiency of our algorithm in comparison with some recent iterative methods in the literature.
1 Introduction
Let E be a real Banach space with norm \(||\cdot ||\) and \(E^*\) be the dual of E. For \(f\in E^*\) and \(x\in E,\) let \(\langle x,f \rangle \) be the value of f at x. Suppose that C is a nonempty closed and convex subset of E and \(A:E\rightarrow E^*\) is a single-valued mapping. The Variational Inequality Problem (VIP) associated with C and A is formulated as follows: Find a point \(x^*\in C\) such that
$$\begin{aligned} \langle y-x^*, Ax^* \rangle \ge 0, \quad \forall ~ y\in C. \end{aligned}$$
(1.1)
The solution set of VIP (1.1) is denoted by VI(C, A). Variational inequality theory has emerged as an interesting and fascinating branch of applicable mathematics with a wide range of applications in finance, economics, engineering, medicine, mathematical programming, partial differential equations, minimization problems and optimal control problems (see, for example, [34, 46] and the references therein). The field is dynamic and is experiencing impressive growth in both theory and applications; hence, research techniques and problems are drawn from various fields. It has been shown that this theory provides the most natural, direct, simple, unified and efficient framework for a general treatment of a wide class of unrelated linear and nonlinear problems; see, for example, [2, 3] and the references therein. Various iterative methods for solving VIP (1.1) and related optimization problems in Hilbert and Banach spaces have been proposed and analyzed by several authors (see, for example, [4, 12,13,14, 18, 42]).
In a real Hilbert space, a simple iterative algorithm for solving VIP (1.1) is the gradient method, which is presented as follows:
Algorithm 1.1
(Gradient Method (GM))
It is known that if A is strongly monotone and L-Lipschitz continuous on C, then VIP (1.1) has a unique solution, and the sequence \(\{x_n\}\) generated by Algorithm 1.1 converges strongly to this solution under suitable conditions. If, however, A is only inverse strongly monotone, then \(\{x_n\}\) generated by Algorithm 1.1 converges weakly to a solution of VIP (1.1) under certain conditions.
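For illustration, the classical gradient projection step takes the form \(x_{n+1}=P_C(x_n-\lambda Ax_n).\) The following is a minimal numerical sketch in \({\mathbb {R}}^2\) (a Hilbert space); the strongly monotone linear operator and the closed unit ball as C are illustrative choices, not taken from the works cited above.

```python
import numpy as np

def proj_ball(x, r=1.0):
    # Euclidean projection onto the closed ball of radius r centered at 0
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def gradient_method(A, proj, x0, lam, iters=200):
    # classical gradient projection: x_{n+1} = P_C(x_n - lam * A(x_n))
    x = x0
    for _ in range(iters):
        x = proj(x - lam * A(x))
    return x

M = np.array([[2.0, 0.0], [0.0, 3.0]])   # A(x) = Mx is strongly monotone and 3-Lipschitz
A = lambda x: M @ x
x = gradient_method(A, proj_ball, np.array([0.9, -0.4]), lam=0.2)
print(np.linalg.norm(x))   # the iterates approach the unique solution x* = 0
```

With a strongly monotone, Lipschitz operator and a sufficiently small step, the iteration contracts toward the unique solution, in line with the strong convergence statement above.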
In order to relax the monotonicity condition on the cost operator A, Korpelevich [26] introduced the following extragradient method in a finite dimensional Euclidean space \({\mathbb {R}}^n:\)
Algorithm 1.2
(Extragradient Method (EgM))
where \(C\subseteq {\mathbb {R}}^n, A:C\rightarrow {\mathbb {R}}^n\) is a monotone and L-Lipschitz continuous operator with \(\lambda \in (0, \frac{1}{L}).\) If the solution set VI(C, A) is nonempty, then the sequence \(\{x_n\}\) generated by the EgM converges to an element of VI(C, A). The EgM was further extended to infinite dimensional spaces by many authors. Observe that the EgM involves two projections onto the closed convex set C and two evaluations of A per iteration. Computing the projection onto an arbitrary closed convex set is a difficult task, a drawback which may affect the efficiency of the EgM, as noted in [8]. Hence, a major improvement of the EgM is to minimize the number of evaluations of \(P_C\) per iteration. Censor et al. [8] initiated an attempt in this direction by replacing the second projection with a projection onto a half-space. The resulting method involves only one projection onto C and is called the subgradient extragradient method (SEgM).
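The two EgM projections \(y_n=P_C(x_n-\lambda Ax_n),\ x_{n+1}=P_C(x_n-\lambda Ay_n)\) versus the single SEgM projection can be contrasted in a small \({\mathbb {R}}^2\) sketch; the skew operator, the unit-ball constraint and the degeneracy tolerance below are illustrative assumptions, not taken from [8].

```python
import numpy as np

def proj_ball(x, r=1.0):
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

def proj_halfspace(w, a, b):
    # closed-form Euclidean projection onto {z : <a, z> <= b}
    s = a @ w - b
    return w if s <= 0 else w - (s / (a @ a)) * a

def extragradient(A, proj, x0, lam, iters=500):
    # Korpelevich: y_n = P_C(x_n - lam A x_n); x_{n+1} = P_C(x_n - lam A y_n)
    x = x0
    for _ in range(iters):
        y = proj(x - lam * A(x))
        x = proj(x - lam * A(y))
    return x

def subgradient_extragradient(A, proj, x0, lam, iters=500):
    # Censor et al.: the second P_C is replaced by the projection onto the
    # half-space T_n = {w : <x_n - lam A x_n - y_n, w - y_n> <= 0}
    x = x0
    for _ in range(iters):
        y = proj(x - lam * A(x))
        a = x - lam * A(x) - y
        if a @ a > 1e-16:
            x = proj_halfspace(x - lam * A(y), a, a @ y)
        else:           # x - lam A x already lies in C, half-space is the whole space
            x = x - lam * A(y)
    return x

R = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew: monotone, 1-Lipschitz, not inverse strongly monotone
A = lambda x: R @ x
x0 = np.array([0.7, 0.7])
print(np.linalg.norm(extragradient(A, proj_ball, x0, 0.5)))            # -> 0 (the solution)
print(np.linalg.norm(subgradient_extragradient(A, proj_ball, x0, 0.5)))
```

For this skew operator the plain gradient method fails (A is not inverse strongly monotone), while both EgM and SEgM converge to the unique solution \(x^*=0\) with \(\lambda =0.5<1/L=1.\)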
Observe that all of the above results on the gradient, extragradient and subgradient extragradient methods are confined to Hilbert spaces. However, many important practical problems are naturally posed in Banach spaces. Hence, it is desirable to propose an iterative algorithm for finding a solution of VIP (1.1) in Banach spaces.
In 2008, Iiduka and Takahashi [22] proved the following theorem in the Banach space setting.
Theorem 1.3
Let E be a 2-uniformly smooth Banach space whose duality mapping J is weakly sequentially continuous, and C be a nonempty, closed and convex subset of E. Assume that A is an operator of C into \(E^*\) that satisfies:
- (A1) A is \(\alpha \)-inverse-strongly-monotone;
- (A2) \(VI(C,A)\ne \emptyset ;\)
- (A3) \(||Ay||\le ||Ay-Au||\) for all \(y\in C\) and \(u\in VI(C,A).\)
Suppose that \(x_1=x\in C\) and \(\{x_n\}\) is given by
for every \(n=1,2,\ldots ,\) where \(\{\lambda _n\}\) is a sequence of positive numbers. If \(\{\lambda _n\}\) is chosen so that \(\lambda _n\in [a,b]\) for some a, b with \(0<a<b<c_1\alpha ,\) then the sequence \(\{x_n\}\) converges weakly to some element \(z\in VI(C,A),\) where \(c_1\) is the 2-uniformly convexity constant of E. Further \(z=\lim _{n\rightarrow \infty }\Pi _{VI(C,A)}(x_n).\)
We identify the following drawbacks with Theorem 1.3 above:
- (P1) The condition (A3) is too restrictive;
- (P2) The algorithm is restricted to the class of inverse strongly monotone mappings;
- (P3) The theorem requires that the Banach space E have a weakly sequentially continuous duality mapping, which is too strong a hypothesis;
- (P4) The sequence \(\{x_n\}\) generated by the iterative scheme only converges weakly.
In optimization theory, strong convergence results are more useful and desirable than weak convergence results. Therefore, it is necessary to develop algorithms which generate sequences that converge strongly to the solution of the problem under consideration.
Let \(S: E\rightarrow E\) be a nonlinear mapping. A point \(x^*\in E\) is called a fixed point of S if \(Sx^* = x^*.\) We denote by F(S) the set of all fixed points of S, i.e. \(F(S)=\{x\in E: Sx=x\}.\)
If S is a multivalued mapping, i.e. \(S : E \rightarrow 2^{E},\) then \(x^*\in E\) is called a fixed point of S if \(x^*\in Sx^*.\)
The fixed point theory for multivalued mappings has applications in various fields of research such as game theory, control theory and mathematical economics.
In this article, we are interested in the problem of finding a common solution of the VIP (1.1) and the common fixed point problem for multivalued mappings in Banach spaces. The importance of and motivation for studying the VIP and common fixed point problems lie in their potential application to mathematical models whose constraints can be expressed as a fixed point problem and a VIP. This happens, in particular, in practical problems such as signal processing, network resource allocation and image recovery. One such scenario is the bandwidth allocation problem for two services in heterogeneous wireless access networks, in which the bandwidths of the services are mathematically related (see, for instance, [21, 30] and the references therein).
In order to overcome the identified drawbacks in Theorem 1.3 above, Nakajo in [33] proposed the following hybrid method:
Algorithm 1.4
where E is a 2-uniformly convex and uniformly smooth Banach space and A is monotone and Lipschitz continuous. It was proved that the sequence \(\{x_n\}\) generated by Algorithm 1.4 converges strongly to \(\Pi _Dx,\) where \(D = VI(C, A)\cap F(T)\) and T is a relatively nonexpansive mapping.
We observe that the drawbacks (P1)-(P4) pointed out in Theorem 1.3 above have been successfully addressed in [33]. However, we note that the algorithm requires a projection onto the closed convex set C per iteration, which is one of the drawbacks identified earlier in the gradient, extragradient and subgradient extragradient methods. Another shortcoming of Algorithm 1.4 and the other algorithms stated above is that the step size is a constant (or a sequence) which depends on the Lipschitz constant of the monotone operator. The Lipschitz constant is typically assumed to be known, or at least estimated a priori. However, in many instances this parameter is unknown or difficult to estimate. Moreover, the step size prescribed in this way is often very small, which deteriorates the rate of convergence of the algorithm.
In [31], Maingé extended and unified the Krasnosel’skii–Mann algorithm as follows:
Algorithm 1.5
for each \(n\ge 1,\) and proved weak convergence for a nonexpansive mapping T under some conditions.
The term \(\theta _n(x_n-x_{n-1})\) incorporated in Algorithm 1.5 is called the inertial term. It plays a crucial role in accelerating the rate of convergence of iterative schemes; for details see [21, 29, 38]. Recently, several researchers have constructed some fast algorithms by employing inertial technique in Hilbert and Banach spaces (see, e.g., [2, 6, 10, 11, 15, 24, 35, 39, 43, 45]).
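A schematic Euclidean sketch of an inertial Krasnosel’skii–Mann step of the form \(w_n=x_n+\theta _n(x_n-x_{n-1}),\ x_{n+1}=(1-\alpha _n)w_n+\alpha _nTw_n\) is given below; the constant parameters and the affine mapping T are illustrative assumptions, not the precise scheme of [31].

```python
import numpy as np

def inertial_km(T, x0, theta=0.3, alpha=0.5, iters=200):
    # inertial Krasnosel'skii-Mann iteration (schematic form):
    #   w_n     = x_n + theta * (x_n - x_{n-1})     # inertial extrapolation
    #   x_{n+1} = (1 - alpha) * w_n + alpha * T(w_n)
    x_prev, x = x0, x0
    for _ in range(iters):
        w = x + theta * (x - x_prev)
        x_prev, x = x, (1 - alpha) * w + alpha * T(w)
    return x

b = np.array([1.0, -2.0])
T = lambda x: 0.5 * (x + b)        # nonexpansive with unique fixed point b
x = inertial_km(T, np.zeros(2))
print(x)                           # converges to the fixed point [1, -2]
```

The extrapolation \(\theta (x_n-x_{n-1})\) reuses the previous displacement; for suitable \(\theta \) the error recursion still contracts, which is the acceleration effect discussed above.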
Therefore, inspired and motivated by the works cited above, we introduce in this paper an inertial hybrid gradient method with self-adaptive step size for approximating a common solution of VIP (1.1) and the common fixed point problem for an infinite family of relatively nonexpansive multivalued mappings. In our proposed algorithm, the projection onto the closed convex set C in Algorithm 1.4 is replaced with a projection onto a half-space, which is easier to implement. Moreover, we prove a strong convergence theorem without prior knowledge of the Lipschitz constant of the monotone operator in uniformly smooth and 2-uniformly convex Banach spaces, and we apply our result to find a common solution of the constrained convex minimization problem and the common fixed point problem in Banach spaces. Finally, we present a numerical example to demonstrate the efficiency of our algorithm in comparison with some recent iterative methods in the literature.
2 Preliminaries
In what follows, we let \({\mathbb {N}}\) and \({\mathbb {R}}\) represent the set of positive integers and real numbers, respectively. Let E be a Banach space, \(E^*\) be the dual space of E, and let \(\langle \cdot , \cdot \rangle \) denote the duality pairing of E and \(E^*.\) When \(\{x_n\}\) is a sequence in E, we denote the strong convergence of \(\{x_n\}\) to \(x\in E\) by \(x_n\rightarrow x\) and the weak convergence by \(x_n\rightharpoonup x.\) An element \(z\in E\) is called a weak cluster point of \(\{x_n\}\) if there exists a subsequence \(\{x_{n_j}\}\) of \(\{x_n\}\) converging weakly to z. We write \(w_\omega (x_n)\) to indicate the set of all weak cluster points of \(\{x_n\}.\)
Now, we define some operators which are employed in the article.
Definition 2.1
[23] An operator \(A:E\rightarrow E^*\) is said to be
- (i) monotone if
$$\begin{aligned} \langle x-y, Ax-Ay \rangle \ge 0,\quad \forall ~ x,y\in E; \end{aligned}$$
- (ii) \(\alpha \)-inverse-strongly-monotone if there exists a positive real number \(\alpha \) such that
$$\begin{aligned} \langle x-y, Ax-Ay \rangle \ge \alpha ||Ax-Ay||^2,\quad \forall ~ x,y\in E; \end{aligned}$$
- (iii) L-Lipschitz continuous if there exists a constant \(L>0\) such that
$$\begin{aligned} ||Ax-Ay||\le L||x-y||,\quad \forall ~ x,y\in E. \end{aligned}$$
It is obvious that an \(\alpha \)-inverse-strongly-monotone mapping is monotone and \(\frac{1}{\alpha }\)-Lipschitz continuous. However, the converse is not true in general.
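A standard counterexample for the converse is a rotation by \(90^\circ \) in \({\mathbb {R}}^2\): it is monotone and 1-Lipschitz, yet \(\langle x-y, Ax-Ay \rangle =0\) while \(||Ax-Ay||>0\) whenever \(x\ne y,\) so no \(\alpha >0\) can work. A quick numerical check:

```python
import numpy as np

R = np.array([[0.0, 1.0], [-1.0, 0.0]])   # 90-degree rotation (skew-symmetric)
A = lambda x: R @ x

rng = np.random.default_rng(0)
for _ in range(1000):
    x, y = rng.standard_normal(2), rng.standard_normal(2)
    # monotone: <x - y, Ax - Ay> = 0 >= 0 for a skew-symmetric operator
    assert abs((x - y) @ (A(x) - A(y))) < 1e-12
    # 1-Lipschitz: a rotation is an isometry
    assert np.isclose(np.linalg.norm(A(x) - A(y)), np.linalg.norm(x - y))
    # but 0 < alpha * ||Ax - Ay||^2 for any alpha > 0 when x != y,
    # so A is not alpha-inverse-strongly-monotone
print("monotone and 1-Lipschitz, but not inverse strongly monotone")
```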
Definition 2.2
A function \(f:E\rightarrow {\mathbb {R}}\) is said to be weakly lower semicontinuous (w-lsc) at \(x\in E\) if
$$\begin{aligned} f(x)\le \liminf _{n\rightarrow \infty }f(x_n) \end{aligned}$$
holds for an arbitrary sequence \(\{x_n\}_{n=0}^\infty \) in E satisfying \(x_n\rightharpoonup x.\)
Let \(g:E\rightarrow {\mathbb {R}}\) be a function. The subdifferential of g at x is defined by
$$\begin{aligned} \partial g(x) = \{z\in E^*: g(y)\ge g(x) + \langle y-x, z \rangle , \quad \forall ~ y\in E \}. \end{aligned}$$
If \(\partial g(x)\ne \emptyset ,\) then we say that g is subdifferentiable at x.
A Banach space E is said to be strictly convex if for all \(x,y\in E\) with \(||x||=||y||=1\) and \(x\ne y,\) we have \(||(x+y)/2||<1.\) E is said to be uniformly convex if for each \(\varepsilon \in (0,2],\) there exists \(\delta >0\) such that for all \(x,y\in E\) with \(||x||=||y||=1\) and \(||x-y||\ge \varepsilon ,\) we have \(||(x+y)/2||<1-\delta .\) It is a well known result that a uniformly convex Banach space is strictly convex and reflexive. The modulus of convexity of E is defined as
$$\begin{aligned} \delta _E(\varepsilon ) = \inf \Big \{1-\Big \Vert \frac{x+y}{2}\Big \Vert : ||x||=||y||=1,~ ||x-y||\ge \varepsilon \Big \}. \end{aligned}$$
Then, E is uniformly convex if and only if \(\delta _E(\varepsilon )>0\) for all \(\varepsilon \in (0,2],\) see [40, 41]. In particular, if H is a real Hilbert space, then \(\delta _H(\varepsilon )=1-\sqrt{1-(\varepsilon /2)^2}\). E is said to be p-uniformly convex if there exists a constant \(c>0\) such that \(\delta _E(\varepsilon )\ge c\varepsilon ^p\) for all \(\varepsilon \in (0,2]\) with \(p\ge 2.\) It is easy to see that a p-uniformly convex Banach space is uniformly convex. In particular, a Hilbert space is 2-uniformly convex.
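In the Hilbert space case the stated formula for \(\delta _H\) can be verified directly: by the parallelogram law, any two unit vectors at distance \(\varepsilon \) have a midpoint of norm exactly \(\sqrt{1-(\varepsilon /2)^2}.\) A small numerical check in \({\mathbb {R}}^2\) (the particular vector pairs are illustrative; by the identity above any pair gives the same deficit):

```python
import numpy as np

def delta_H(eps):
    # modulus of convexity of a Hilbert space
    return 1 - np.sqrt(1 - (eps / 2) ** 2)

for eps in [0.1, 0.5, 1.0, 1.5, 2.0]:
    h = np.sqrt(1 - (eps / 2) ** 2)
    x = np.array([h,  eps / 2])    # unit vector
    y = np.array([h, -eps / 2])    # unit vector with ||x - y|| = eps
    assert np.isclose(np.linalg.norm(x), 1) and np.isclose(np.linalg.norm(y), 1)
    assert np.isclose(np.linalg.norm(x - y), eps)
    # parallelogram law: ||(x+y)/2||^2 = 1 - (eps/2)^2, so the convexity
    # deficit 1 - ||(x+y)/2|| equals delta_H(eps) exactly
    assert np.isclose(1 - np.linalg.norm((x + y) / 2), delta_H(eps))
print("delta_H matches the parallelogram-law computation")
```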
A Banach space E is said to be smooth, if the limit \(\lim _{t\rightarrow 0}(||x+ty||-||x||)/t\) exists for all \(x,y\in S_E,\) where \(S_E=\{x\in E: ||x||=1\}.\) Moreover, if this limit is attained uniformly for \(x,y\in S_E,\) then E is said to be uniformly smooth. Clearly, a uniformly smooth space is smooth. In particular, a Hilbert space is uniformly smooth.
For \(p>1,\) the generalized duality mapping \(J_p:E\rightarrow 2^{E^*}\) is defined by
$$\begin{aligned} J_p(x) = \{f\in E^*: \langle x, f \rangle = ||x||^p,~ ||f|| = ||x||^{p-1}\}. \end{aligned}$$
In particular, \(J = J_2\) is called the normalized duality mapping. If \(E = H,\) where H is a real Hilbert space, then \(J = I.\) The normalized duality mapping J has the following properties [40, 41]:
- (1) if E is smooth, then J is single-valued;
- (2) if E is strictly convex, then J is one-to-one and strictly monotone;
- (3) if E is reflexive, then J is surjective;
- (4) if E is uniformly smooth, then J is uniformly norm-to-norm continuous on each bounded subset of E.
Let E be a smooth Banach space. The Lyapunov functional \(\phi :E\times E\rightarrow {\mathbb {R}}\) is defined by
$$\begin{aligned} \phi (x,y) = ||x||^2 - 2\langle x, Jy \rangle + ||y||^2, \quad \forall ~ x,y\in E. \end{aligned}$$
From the definition, it is easy to see that \(\phi (x,x)=0\) for every \(x\in E.\) If E is strictly convex, then \(\phi (x,y)=0\Leftrightarrow x=y.\) If E is a Hilbert space, it is easy to see that \(\phi (x,y)=||x-y||^2\) for all \(x,y\in E.\) Moreover, for every \(x,y,z\in E\) and \(\alpha \in (0,1),\) the Lyapunov functional \(\phi \) satisfies the following properties:
- (P1) \(0\le (||x|| - ||y||)^2\le \phi (x,y)\le (||x|| + ||y||)^2;\)
- (P2) \(\phi (x, J^{-1}(\alpha Jz + (1-\alpha )Jy))\le \alpha \phi (x,z) + (1-\alpha )\phi (x,y);\)
- (P3) \(\phi (x,y) = \phi (x,z) + \phi (z,y) + 2\langle z-x, Jy-Jz \rangle ;\)
- (P4) \(\phi (x,y)\le 2\langle y-x, Jy-Jx \rangle ;\)
- (P5) \(\phi (x,y) = \langle x, Jx-Jy \rangle + \langle y-x, Jy \rangle \le ||x||\,||Jx-Jy|| +||y-x||\,||y||.\)
Also, we define the functional \(V:E\times E^*\rightarrow [0, +\infty )\) by
$$\begin{aligned} V(x, x^*) = ||x||^2 - 2\langle x, x^* \rangle + ||x^*||^2, \quad \forall ~ x\in E,~ x^*\in E^*. \end{aligned}$$
(2.2)
It can be deduced from (2.2) that V is non-negative and \(V(x, x^*) = \phi (x, J^{-1}x^*)\) for all \(x\in E\) and \(x^*\in E^*.\)
Definition 2.3
Let C be a nonempty closed convex subset of a real Banach space E. A point \(p\in C\) is called an asymptotic fixed point of T if C contains a sequence \(\{x_n\}\) which converges weakly to p such that \(\lim _{n\rightarrow \infty }||x_n-Tx_n||=0.\) We denote the set of asymptotic fixed points of T by \(\hat{F}(T).\)
A mapping \(T:C\rightarrow C\) is said to be:
- 1. relatively nonexpansive (see [27]) if:
  - i. \(F(T)\ne \emptyset ;\)
  - ii. \(\phi (p, Tx)\le \phi (p, x)\quad \forall p\in F(T),~ x\in C;\)
  - iii. \(\hat{F}(T)=F(T);\)
- 2. generalized nonspreading [20] if there exist \(\alpha ,\beta ,\gamma ,\delta \in {\mathbb {R}}\) such that
$$\begin{aligned} \alpha \phi (Tx,Ty)+(1-\alpha )\phi (x,Ty) + \gamma [\phi (Ty,Tx)-\phi (Ty,x)] \le \beta \phi (Tx,y)+(1-\beta )\phi (x,y) +\delta [\phi (y,Tx)-\phi (y,x)]. \end{aligned}$$
Let N(C) and CB(C) denote the families of nonempty subsets and of nonempty closed bounded subsets of C, respectively. The Hausdorff metric on CB(C) is defined by
$$\begin{aligned} H(A,B) = \max \Big \{\sup _{a\in A}\text {dist}(a,B),~ \sup _{b\in B}\text {dist}(b,A)\Big \} \end{aligned}$$
for all \(A,B\in CB(C),\) where \(\text {dist}(a,B):=\inf \{||a-b||:b\in B\}.\)
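For finite sets in \({\mathbb {R}}^2\) (an illustrative special case), the Hausdorff metric \(H(A,B)=\max \{\sup _{a\in A}\text {dist}(a,B),\ \sup _{b\in B}\text {dist}(b,A)\}\) can be computed directly:

```python
import numpy as np

def dist(a, B):
    # dist(a, B) = inf{||a - b|| : b in B}, for a finite set B
    return min(np.linalg.norm(a - b) for b in B)

def hausdorff(A, B):
    # H(A, B) = max{ sup_{a in A} dist(a, B), sup_{b in B} dist(b, A) }
    return max(max(dist(a, B) for a in A), max(dist(b, A) for b in B))

A = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
B = [np.array([0.0, 0.0]), np.array([3.0, 0.0])]
print(hausdorff(A, B))   # 2.0: the point (3, 0) is at distance 2 from A
```

Note the asymmetry of the two suprema: every point of A is within distance 1 of B, but not conversely, so both directions must be taken into account.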
Let \(T : C\rightarrow CB(C)\) be a multivalued mapping. An element \(p\in C\) is called a fixed point of T if \(p\in Tp.\) A point \(p\in C\) is called an asymptotic fixed point of T, if there exists a sequence \(\{x_n\}\) in C which converges weakly to p such that \(\lim _{n\rightarrow \infty }\text {dist}(x_n,Tx_n)=0.\)
A mapping \(T:C\rightarrow CB(C)\) is said to be relatively nonexpansive if:
-
i.
\(F(T)\ne \emptyset ;\)
-
ii.
\(\phi (p, u)\le \phi (p, x)\quad \forall ~ u\in Tx, p\in F(T);\)
-
iii.
\(\hat{F}(T)=F(T).\)
The class of relatively nonexpansive multivalued mappings contains the class of relatively nonexpansive single-valued mappings.
Remark 2.4
(See [19]) Let E be a strictly convex and smooth Banach space, and C a nonempty closed convex subset of E. Suppose \(T:C\rightarrow N(C)\) is a relatively nonexpansive multi-valued mapping. If \(p\in F(T),\) then \(Tp =\{p\}.\)
Lemma 2.5
[25] Let E be a smooth and uniformly convex Banach space, and \(\{x_n\}\) and \(\{y_n\}\) be sequences in E such that either \(\{x_n\}\) or \(\{y_n\}\) is bounded. If \(\phi (x_n,y_n)\rightarrow 0\) as \(n\rightarrow \infty ,\) then \(||x_n-y_n||\rightarrow 0\) as \(n\rightarrow \infty .\)
Remark 2.6
From property (P4) of the Lyapunov functional, it follows that the converse of Lemma 2.5 also holds if the sequences \(\{x_n\}\) and \(\{y_n\}\) are bounded (see also [47]).
Lemma 2.7
[22] Let C be a nonempty closed convex subset of a reflexive, strictly convex, and smooth Banach space E, and let \(x\in E\) and \(z\in C.\) Then \(z=\Pi _Cx\) implies
$$\begin{aligned} \phi (y, z) + \phi (z, x)\le \phi (y, x), \quad \forall ~ y\in C. \end{aligned}$$
Lemma 2.8
[22] Let p be a real number with \(p \ge 2.\) Then, E is p-uniformly convex if and only if there exists \(c\in (0,1]\) such that
Here, the best constant 1/c is called the p-uniformly convexity constant of E.
Lemma 2.9
[33] Let E be a 2-uniformly convex and smooth Banach space. Then, for every \(x,y\in E, \phi (x,y)\ge \frac{c^2}{2}||x-y||^2, \) where \(\frac{1}{c}\) is the 2-uniformly convexity constant of E.
Lemma 2.10
[22, 32] Let C be a nonempty closed and convex subset of a smooth Banach space E and \(x\in E.\) Then, \(x_0=\Pi _Cx\) if and only if
$$\begin{aligned} \langle x_0-y, Jx-Jx_0 \rangle \ge 0, \quad \forall ~ y\in C. \end{aligned}$$
Lemma 2.11
[5] Let E be a reflexive strictly convex and smooth Banach space with \(E^*\) as its dual. Then,
$$\begin{aligned} V(x, x^*) + 2\langle J^{-1}x^* - x, y^* \rangle \le V(x, x^*+y^*) \end{aligned}$$
for all \(x\in E\) and \(x^*,y^*\in E^*.\)
Lemma 2.12
[19] Let E be a strictly convex and smooth Banach space and C be a nonempty closed convex subset of E. Let \(T:C\rightarrow N(C)\) be a relatively nonexpansive multi-valued mapping. Then F(T) is closed and convex.
The following Lemma shows the relationship between generalized nonspreading mappings and relatively nonexpansive mappings.
Lemma 2.13
[20] Let E be a strictly convex Banach space with a uniformly Gâteaux differentiable norm, let C be a nonempty closed convex subset of E and let T be a generalized nonspreading mapping of C into itself such that \(F(T)\ne \emptyset .\) Then, T is relatively nonexpansive.
An operator A of C into \(E^*\) is said to be hemicontinuous if for all \(x,y\in C,\) the mapping f of [0, 1] into \(E^*\) defined by \(f(t)=A(tx+(1-t)y)\) is continuous with respect to the weak\(^*\)-topology of \(E^*.\)
Lemma 2.14
[22] Let C be a nonempty, closed and convex subset of a Banach space E and A a monotone, hemicontinuous operator of C into \(E^*.\) Then
$$\begin{aligned} VI(C,A) = \{u\in C: \langle v-u, Av \rangle \ge 0, \quad \forall ~ v\in C\}. \end{aligned}$$
It is obvious from Lemma 2.14 that the set VI(C, A) is a closed and convex subset of C.
Lemma 2.15
[9] Let E be a uniformly convex Banach space, \(r>0\) a positive number and \(B_r(0)\) a closed ball of E. Then, for any given sequence \(\{x_i\}_{i=1}^\infty \subset B_r(0)\) and any given sequence \(\{\lambda _i\}_{i=1}^\infty \) of positive numbers with \(\sum _{i=1}^\infty \lambda _i=1,\) there exists a continuous, strictly increasing, and convex function \(g:[0, 2r)\rightarrow [0, \infty )\) with \(g(0)=0\) such that, for any positive integers i, j with \(i<j,\)
Lemma 2.16
[2] Let \(\{a_n\}\) be a sequence of nonnegative real numbers, \(\{\alpha _n\}\) be a sequence in (0, 1) with \(\sum _{n=1}^\infty \alpha _n = \infty \) and \(\{b_n\}\) be a sequence of real numbers. Assume that
$$\begin{aligned} a_{n+1}\le (1-\alpha _n)a_n + \alpha _n b_n, \quad \forall ~ n\ge 1. \end{aligned}$$
If \(\limsup _{k\rightarrow \infty }b_{n_k}\le 0\) for every subsequence \(\{a_{n_k}\}\) of \(\{a_n\}\) satisfying \(\liminf _{k\rightarrow \infty }(a_{n_{k+1}} - a_{n_k})\ge 0,\) then \(\lim _{n\rightarrow \infty }a_n =0.\)
Lemma 2.17
[22, 48] Let E be a p-uniformly convex Banach space with \(p\ge 2.\) Then
where \(\frac{1}{c}\) is the p-uniformly convexity constant.
3 Main results
In this section, we present our algorithm and prove some strong convergence results for the proposed algorithm. We establish the convergence of the algorithm under the following conditions:
Condition A:
- (A1) E is a 2-uniformly convex and uniformly smooth Banach space with the 2-uniformly convexity constant \(\frac{1}{c};\)
- (A2) C is a nonempty closed convex set given by
$$\begin{aligned} C = \{x\in E: g(x)\le 0\}, \end{aligned}$$
where \(g:E\rightarrow {\mathbb {R}}\) is a convex function;
- (A3) g is weakly lower semicontinuous on E;
- (A4) for any \(x\in E,\) at least one subgradient \(\xi \in \partial g(x)\) can be calculated (i.e. g is subdifferentiable on E), where \(\partial g(x)\) is defined by
$$\begin{aligned} \partial g(x) = \{z\in E^*: g(y)\ge g(x) + \langle y-x, z \rangle , \quad \forall ~ y\in E \}. \end{aligned}$$
In addition, \(\partial g\) is bounded on bounded sets.
Condition B:
- (B1) The solution set \(\Omega =VI(C,A)\cap \bigcap _{i=1}^{\infty }F(S_i)\) is nonempty, where \(S_i:E\rightarrow CB(E)\) is an infinite family of relatively nonexpansive multivalued mappings;
- (B2) The mapping \(A:E\rightarrow E^*\) is monotone and Lipschitz continuous with Lipschitz constant \(L>0.\)
Condition C:
- (C1) \(\{\alpha _n\}\) is a bounded sequence of real numbers;
- (C2) \(\{\beta _{n,i}\}\subset [a,b]\subset (0,1)\) for some \(a,b\in (0,1),\) \(\sum _{i=0}^\infty \beta _{n,i}=1,\) and \(\liminf _{n\rightarrow \infty }\beta _{n,0}\beta _{n,i}>0\) for all \(i\ge 1;\)
- (C3) \(\lambda _1>0\) and \(\mu \in (0,1).\)
We now present the algorithm as follows:
Remark 3.2
The advantage of our proposed method is that the projection is onto a half-space and is therefore easier to implement than the projection onto the closed convex set C in Algorithm 1.4 and some other existing methods. In addition, we establish the strong convergence of the sequence of iterates generated by our method without prior knowledge of the Lipschitz constant of the monotone operator.
Remark 3.3
From the construction of the half-space \(C_n,\) it can easily be verified that \(C\subseteq C_n.\)
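The computational advantage behind Remark 3.3 is that projecting onto a half-space has a closed form. The sketch below builds such an outer half-space \(\{x: g(w) + \langle \xi , x-w \rangle \le 0\}\) from one subgradient of g, as in Condition (A4); the quadratic g and the point w are illustrative choices, not data from Algorithm 3.1.

```python
import numpy as np

def proj_halfspace(x, a, b):
    # closed-form Euclidean projection onto {z : <a, z> <= b}; no inner loop needed
    s = a @ x - b
    return x if s <= 0 else x - (s / (a @ a)) * a

# C = {x : g(x) <= 0} is the unit ball; the subgradient inequality gives the
# outer half-space C_w = {x : g(w) + <xi, x - w> <= 0}, i.e. <xi, x> <= <xi, w> - g(w),
# which contains C, mirroring the relation C subset C_n of Remark 3.3
g  = lambda x: np.linalg.norm(x) ** 2 - 1
dg = lambda x: 2 * x                      # gradient (hence a subgradient) of g
w  = np.array([2.0, 0.0])                 # a point outside C
xi = dg(w)
p  = proj_halfspace(w, xi, xi @ w - g(w))
print(p)   # [1.25, 0.]: lands on the bounding hyperplane of the half-space
```

Because the half-space contains C, projecting onto it moves the iterate toward C at the cost of a single inner product, instead of an inner optimization loop for \(P_C.\)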
Remark 3.4
The sequence \(\{\lambda _n\}\) is a monotonically decreasing sequence with lower bound \(\min \{\frac{\mu }{L}, \lambda _1\},\) and hence, the limit of \(\{\lambda _n\}\) exists and is denoted by \(\lambda =\lim _{n\rightarrow \infty }\lambda _n.\) It is clear that \(\lambda >0.\)
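A step-size rule consistent with Remark 3.4 is the standard Lipschitz-free update \(\lambda _{n+1}=\min \{\mu ||w_n-y_n||/||Aw_n-Ay_n||,\ \lambda _n\}\) (the exact update of Algorithm 3.1 is not restated here; this form is an assumption matching the monotonicity and lower bound claimed in the remark):

```python
import numpy as np

def next_lambda(lam, w, y, Aw, Ay, mu):
    # lambda_{n+1} = min{ mu ||w - y|| / ||Aw - Ay||, lambda_n } when Aw != Ay,
    # so {lambda_n} is nonincreasing; L-Lipschitz continuity of A gives the
    # lower bound min{mu / L, lambda_1} without knowing L itself
    d = np.linalg.norm(Aw - Ay)
    return lam if d == 0 else min(mu * np.linalg.norm(w - y) / d, lam)

L_const = 2.0                       # A(x) = L x is L-Lipschitz (illustrative)
A = lambda x: L_const * x
lam, mu = 0.7, 0.9
rng = np.random.default_rng(2)
for _ in range(100):
    w, y = rng.standard_normal(3), rng.standard_normal(3)
    lam = next_lambda(lam, w, y, A(w), A(y), mu)
print(lam)   # settles near mu / L = 0.45, never below min{mu/L, lambda_1}
```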
Next, we prove the following lemma which will be utilized in establishing the strong convergence theorem for our proposed algorithm.
Lemma 3.5
Let \(\{x_n\}, \{w_n\}\) and \(\{y_n\}\) be the sequences generated by Algorithm 3.1, and suppose that \(\{x_n\}\) is bounded and \(\lim _{n\rightarrow \infty } ||w_n-y_n||=0.\) If \(\{w_{n_k}\}\) is a subsequence of \(\{w_n\}\) converging weakly to some \(\hat{x}\in E\) as \(k\rightarrow \infty ,\) then \(\hat{x}\in VI(C,A).\)
Proof
Since \(w_{n_k}\rightharpoonup \hat{x},\) the hypothesis of the lemma gives \(y_{n_k}\rightharpoonup \hat{x}.\) Also, since \(y_{n_k}\in C_{n_k},\) it follows from the definition of \(C_n\) that
Since \(\{x_n\}\) is bounded, the construction of the algorithm implies that \(\{w_n\}\) and \(\{y_n\}\) are also bounded. Consequently, by condition (A4) there exists a constant \(M>0\) such that \(||\xi _{n_k}||\le M\) for all \(k\ge 0.\) Combining this with (3.5), we obtain
By condition (A3), we have that
Hence, it follows from condition (A2) that \(\hat{x}\in C.\) By Lemma 2.10, we obtain
Applying the monotonicity of A, it follows that
Letting \(k\rightarrow \infty ,\) and since \(Aw_n\) is bounded and \(\lim _{n\rightarrow \infty }||w_n- y_n||=0,\) we have
By Lemma 2.14, it follows that \(\hat{x}\in VI(C,A)\) as required. \(\square \)
Now, we state and prove the strong convergence theorem for Algorithm 3.1.
Theorem 3.6
Suppose \(\{x_n\}\) is a sequence generated by Algorithm 3.1 such that conditions (A)-(C) are satisfied. Then, the sequence \(\{x_n\}\) converges strongly to \(x^\dagger =\Pi _\Omega x_1.\)
Proof
The proof is divided into four steps as follows:
Step 1: We show that \(\Omega \subset D_n\cap Q_n\) for each \(n\in {\mathbb {N}}.\)
We note that \(D_n\) and \(Q_n\) are half-spaces for each \(n\in {\mathbb {N}}.\) Let \(p\in \Omega ,\) then by property (P3) of the Lyapunov functional we have
Also, we have that for each \(p\in \Omega \)
By the monotonicity of A, we have that
By applying Lemma 2.10 we get
Using (3.8) together with (3.9) we obtain
Applying (3.10), we have
By property (P3) of the Lyapunov functional, and using (3.6) and (3.11), we obtain
Combining (3.7) together with (3.12) gives
From the construction of \(D_n,\) it follows that \(p\in D_n\) for each \(n\in {\mathbb {N}},\) and this implies that \(\Omega \subset D_n\) for each \(n\in {\mathbb {N}}.\) For \(n=1,\) we have that \(Q_1=E\) and it follows that \(\Omega \subset D_1\cap Q_1.\) Suppose \(x_k\) is given and \(\Omega \subset D_k\cap Q_k\) for some \(k\in {\mathbb {N}}.\) From \(x_{k+1}=\Pi _{D_k\cap Q_k}x_1\) and by applying Lemma 2.10, we get
Since \(\Omega \subset D_k\cap Q_k,\) it follows that
From the construction of \(Q_n,\) it follows that \(\Omega \subset Q_{k+1}.\) Hence, \(\Omega \subset D_{k+1}\cap Q_{k+1}.\) By induction, we have that \(\Omega \subset D_n\cap Q_n\) as desired.
Step 2: Next, we show that \(\{x_n\}\) is bounded. From the construction
and by Lemma 2.10, we get
From this it follows that
Since \(\Omega \subset Q_n,\) we have
and this implies that \(\{\phi (x_n,x_1)\}\) is bounded. Therefore, by property (P1) of the Lyapunov functional, we have that \(\{x_n\}\) is bounded.
Step 3: Next, we show that \(w_\omega (x_n)\subset \Omega .\)
Since \(x_{n+1}\in Q_n,\) it follows from (3.15) that
Hence, there exists
Since \(x_{n+1}\in Q_n,\) we have
which implies that
From this, it follows that
and by Lemma 2.5, we have
Since J is uniformly norm-to-norm continuous on each bounded subset of E, we have
From the construction of \(w_n,\) we have \(Jw_n-Jx_n =\alpha _n(Jx_n-Jx_{n-1}).\) Since \(\{\alpha _n\}\) is bounded, we get
By Lemma 2.17, we have
and this gives
Thus,
Since \(\{x_n\}\) and \(\{w_n\}\) are bounded, it follows from Remark 2.6 that
By the definition of \(\lambda _{n+1},\) using the fact that \(x_{n+1}\in D_n\) and applying Lemma 2.9, we have
By Remark 3.4, conditions on the control parameters and applying (3.19) and (3.22), we have
From this, it follows that
Combining (3.20) and (3.25), we obtain
Since J is uniformly norm-to-norm continuous on each bounded subset of E, we have
By the boundedness of \(\{x_n\},\) there exists a subsequence \(\{x_{n_k}\}\) of \(\{x_n\}\) such that \(x_{n_k}\rightharpoonup z.\) From (3.25) and by applying Lemma 3.5, we obtain \(z\in VI(C,A).\) Hence, it follows that
Next, we show that \(w_\omega (x_n)\subset \cap _{i=1}^\infty F(S_i).\) Applying property (P3) of the Lyapunov functional, we get
From (3.26) and (3.27), and using property (P5) of the Lyapunov functional we have
By applying (3.13), (3.22), (3.25), and (3.29) together with the Lipschitz continuity of A, we have
Hence, it follows that
By the property of g, it follows that
Since \(J^{-1}\) is uniformly norm-to-norm continuous on bounded sets, we have
By using (3.26) and (3.31), we get
Hence,
From this, it follows that
By (3.26) and (3.33), and the definition of \(S_i\) for all \(i\ge 1,\) we have
and this implies that
Hence, we have
From (3.28) and (3.34), we obtain
Step 4: Lastly, we show that \(x_n\rightarrow x^\dagger =\Pi _\Omega x_1\) as \(n\rightarrow \infty .\)
Since the norm is convex and lower semicontinuous, we have that
Using this, we get
Using the fact that \(x^\dagger =\Pi _\Omega x_1, z\in \Omega ,\) and together with (3.16) and (3.35), we obtain
Then, it follows that
Since \(x^\dagger =\Pi _\Omega x_1,\) it follows that \(z=x^\dagger ,\) i.e. \(w_\omega (x_n)=\{x^\dagger \}.\) Therefore,
and
Since
then we get
From
we get
By Lemma 2.5, it follows that \(x_n\rightarrow x^\dagger \) as \(n\rightarrow \infty ,\) which completes the proof. \(\square \)
Next we obtain some consequent results of Theorem 3.6.
If the relatively nonexpansive multivalued mappings \(S_i,~ i\in {\mathbb {N}}\) in Theorem 3.6 are single-valued relatively nonexpansive mappings, then we obtain the following result.
Corollary 3.7
Let \(S_i:E\rightarrow E,~ i\in {\mathbb {N}}\) be an infinite family of single-valued relatively nonexpansive mappings such that \(\cap _{i=1}^\infty F(S_i)\ne \emptyset ,\) and let \(\{x_n\}\) be a sequence generated as follows
Suppose that the solution set \(\Omega =VI(C,A)\cap \bigcap _{i=1}^\infty F(S_i)\) is nonempty and other conditions in Theorem 3.6 are satisfied. Then the sequence \(\{x_n\}\) generated by Algorithm 3.8 converges strongly to \(x^\dagger =\Pi _\Omega x_1.\)
Remark 3.9
Corollary 3.7 improves and extends the results in [33] in the following senses:
- (i) The projection onto the closed convex set is replaced with a projection onto a half-space, which can easily be computed.
- (ii) The incorporated inertial term and adaptive step size guarantee better convergence properties.
- (iii) The fixed point problem is extended from a single relatively nonexpansive mapping to an infinite family of relatively nonexpansive mappings.
The following result follows from Lemma 2.13 and Corollary 3.7.
Corollary 3.10
Let \(S_i:E\rightarrow E,~ i\in {\mathbb {N}}\) be an infinite family of generalized nonspreading mappings such that \(\cap _{i=1}^\infty F(S_i)\ne \emptyset ,\) and let \(\{x_n\}\) be a sequence generated as follows:
Suppose that the solution set \(\Omega =VI(C,A)\cap \bigcap _{i=1}^\infty F(S_i)\) is nonempty and other conditions in Theorem 3.6 are satisfied. Then the sequence \(\{x_n\}\) generated by Algorithm 3.11 converges strongly to \(x^\dagger =\Pi _\Omega x_1.\)
4 Application
4.1 Constrained Convex Minimization and Fixed Point Problems
In this section, we give an application of our result to find a common solution of constrained convex minimization problem [7, 36, 44] and common fixed point problem in Banach spaces.
Let E be a real Banach space and C be a nonempty closed convex subset of E. The constrained convex minimization problem is to find a point \(x^*\in C\) such that
where f is a real-valued convex function. Convex optimization theory is a powerful tool for solving many practical problems in operational research. In particular, it has been widely employed to solve practical minimization problems over complicated constraints [37], e.g., convex optimization problems with a fixed point constraint and with a variational inequality constraint.
The following lemma will be required.
Lemma 4.1
[43] Let E be a real Banach space and let C be a nonempty closed convex subset of E. Let f be a convex function of E into \({\mathbb {R}}.\) If f is Fréchet differentiable, then z is a solution of problem (4.1) if and only if \(z\in VI(C,\triangledown f).\)
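In a Euclidean setting Lemma 4.1 can be illustrated directly: the minimizer of a smooth convex f over C solves \(VI(C,\triangledown f)\) and can be reached by gradient projection applied to \(A=\triangledown f.\) The quadratic objective and ball constraint below are illustrative choices.

```python
import numpy as np

def proj_ball(x, r=1.0):
    n = np.linalg.norm(x)
    return x if n <= r else (r / n) * x

# minimize f(x) = 0.5 ||x - d||^2 over the unit ball C; grad f(x) = x - d is
# 1-Lipschitz, and the minimizer solves VI(C, grad f) as in Lemma 4.1
d = np.array([3.0, 4.0])
grad_f = lambda x: x - d
x = np.zeros(2)
for _ in range(200):
    x = proj_ball(x - 0.5 * grad_f(x))
print(x)   # converges to d/||d|| = [0.6, 0.8], the projection of d onto C
```

At the limit point \(x^*=d/||d||\) one checks \(\langle y-x^*, \triangledown f(x^*) \rangle \ge 0\) for all y in the ball, so the minimizer is precisely the solution of the variational inequality.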
By applying Theorem 3.6 and Lemma 4.1, we can find a common solution of the constrained convex minimization problem (4.1) and fixed point problem for an infinite family of relatively nonexpansive multivalued mappings.
Theorem 4.2
Let E be a 2-uniformly convex and uniformly smooth Banach space with the 2-uniformly convexity constant \(\frac{1}{c},\) and \(S_i:E\rightarrow CB(E),~ i\in {\mathbb {N}}\) be an infinite family of relatively nonexpansive multivalued mappings such that \(\cap _{i=1}^\infty F(S_i)\ne \emptyset .\) Let \(f:E\rightarrow {\mathbb {R}}\) be a Fréchet differentiable convex function and suppose \(\triangledown f\) is L-Lipschitz continuous with \(L>0.\) Assume that problem (4.1) is consistent. Let \(\{x_n\}\) be a sequence generated as follows:
Suppose that the solution set \(\Omega =VI(C,\triangledown f)\cap \bigcap _{i=1}^\infty F(S_i)\) is nonempty and other conditions in Theorem 3.6 are satisfied. Then the sequence \(\{x_n\}\) generated by Algorithm 4.3 converges strongly to \(x^\dagger =\Pi _\Omega x_1.\)
Proof
Since f is convex, \(\triangledown f\) is monotone [43]. Letting \(A=\triangledown f\) in Theorem 3.6, we obtain the desired result from Lemma 4.1. \(\square \)
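The first claim of the proof can be made explicit: for a convex Fréchet differentiable f, monotonicity of \(\triangledown f\) follows by adding the two gradient inequalities:

```latex
% Convexity of f gives, for all x, y \in E:
f(y) \ge f(x) + \langle y - x, \triangledown f(x) \rangle, \qquad
f(x) \ge f(y) + \langle x - y, \triangledown f(y) \rangle.
% Adding the two inequalities and cancelling f(x) + f(y) yields
0 \ge \langle y - x, \triangledown f(x) - \triangledown f(y) \rangle,
\quad \text{i.e.} \quad
\langle x - y, \triangledown f(x) - \triangledown f(y) \rangle \ge 0.
```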
5 Numerical Example
In this section, we present a numerical example to demonstrate the efficiency of our Algorithm 3.1 in comparison with Algorithm 1.4 and the algorithm in Appendix 7.1. All numerical computations were carried out using Matlab R2019b.
We choose \(\beta _{n,0}=\frac{9}{10}, \beta _{n,i}=\frac{9^{i-1}}{10^{i+1}}, \lambda _1= 0.7, \mu =0.9, \alpha _n=(1 + \frac{56000}{n})^{n}.\)
Example 5.1
Suppose that \(E=L^2([0,1])\) with inner product
and induced norm
Let C[0, 1] denote the continuous function space defined on the interval [0, 1] and choose an arbitrary fixed \(\varphi \in C[0,1].\) Let \(C:=\{x\in E: ||\varphi x||\le 1\}.\) It can easily be verified that C is a nonempty closed convex subset of E. Define an operator \(A:E\rightarrow E^*\) by
Then A is 2-Lipschitz continuous and monotone on E (see [17]). With these given C and A, the solution set to the VIP (1.1) is given by \(VI(C,A)=\{0\}\ne \emptyset .\) Define \(g:E\rightarrow {\mathbb {R}}\) by
then g is a convex function and C is a level set of g, i.e. \(C=\{x\in E : g(x)\le 0\}.\) Also, g is differentiable on E and \(\partial g(x)=\varphi ^2x,~~\forall ~x\in E\) (see [16]). In this numerical example, we choose \(\varphi (t)=e^{-t},~~ \forall ~ t\in [0,1].\) Let \(S_i:L^2([0,1])\rightarrow L^2([0,1])\) be defined by
Observe that \(F(S_i)\ne \emptyset \) since \(0\in F(S_i)\) for each \(i\in {\mathbb {N}}.\) Moreover, each \(S_i\) is nonexpansive, and hence quasi-nonexpansive (note that in a Hilbert space a relatively nonexpansive mapping reduces to a quasi-nonexpansive mapping). Therefore, the solution set of the problem is \(\Omega = \{0\}.\) We test the algorithms for three different starting points using \(||x_{n+1}-x_{n}||<\epsilon \) as the stopping criterion, where \(\epsilon =10^{-6}.\) The numerical results are reported in Fig. 1 and Table 1.
- Case I: \(x_0 = t^{99} + t,\) \(x_1 = t^9 + t^4 + t^3 + t^2 + t;\)
- Case II: \(x_0 = \frac{1}{2}t,\) \(x_1= \exp (t);\)
- Case III: \(x_0 = \cos (5\pi t),\) \(x_1 = t + 1.\)
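The half-space relaxation underlying the algorithm can be illustrated on a uniform grid. The sketch below projects onto the half-space \(C_n=\{y : g(x_n)+\langle \partial g(x_n), y-x_n\rangle \le 0\}\supseteq C\); the concrete form \(g(x)=\frac{1}{2}(||\varphi x||^2-1)\) is our assumption (chosen to be consistent with \(\partial g(x)=\varphi ^2x\) and \(C=\{x : ||\varphi x||\le 1\}\)), and the Riemann-sum inner product and grid size are discretization choices:

```python
import numpy as np

m = 1000                          # grid points on [0, 1] (discretization choice)
t = np.linspace(0.0, 1.0, m)
h = t[1] - t[0]
phi = np.exp(-t)                  # phi(t) = e^{-t}, as in Example 5.1

def inner(u, v):
    return h * np.dot(u, v)       # simple Riemann-sum approximation of the L^2 inner product

def g(x):
    return 0.5 * (inner(phi * x, phi * x) - 1.0)   # assumed form, matching grad g(x) = phi^2 * x

def project_halfspace(y, x_n):
    """Projection onto C_n = {y : g(x_n) + <grad g(x_n), y - x_n> <= 0}, a half-space containing C."""
    a = phi**2 * x_n                       # (sub)gradient of g at x_n
    rhs = inner(a, x_n) - g(x_n)           # rewrite C_n as {y : <a, y> <= rhs}
    viol = inner(a, y) - rhs
    if viol <= 0.0:
        return y                           # y already lies in the half-space
    return y - (viol / inner(a, a)) * a    # closed-form half-space projection

x_n = np.sin(np.pi * t)           # an arbitrary current iterate
y = project_halfspace(x_n + 5.0, x_n)
```

Unlike the exact projection onto C, this projection has a closed form, which is the computational advantage emphasized in the paper.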
Remark 5.2
We use different starting points in Example 5.1; the numerical results are reported in Table 1 and Figs. 1, 2 and 3. We compared our proposed method with some existing methods and observed from Table 1 and Figs. 1, 2 and 3 that our method outperforms them in terms of the number of iterations and is comparable in terms of CPU time.
6 Conclusion
Variational inequality theory plays an important role in nonlinear analysis and optimization. It has been shown that this theory provides the most natural, direct, simple, unified and efficient framework for a general treatment of a wide class of unrelated linear and nonlinear problems. In this paper, we introduce an inertial hybrid gradient method with self-adaptive step size for finding a common solution of variational inequality and common fixed point problems for an infinite family of relatively nonexpansive multivalued mappings in Banach spaces. Unlike in many existing hybrid gradient methods, here the projection onto the closed convex set is replaced with projection onto some half-space, which is easier to implement. We obtain a strong convergence result for the proposed method without knowledge of the Lipschitz constant of the monotone operator, and we apply our result to find a common solution of constrained convex minimization and common fixed point problems in Banach spaces. Our results extend and improve the results in [33] and [43], and several other results in the current literature in this direction.
Data Availability
Not applicable.
References
Alakoya, T.O., Mewomo, O.T.: S-iteration inertial subgradient extragradient method for variational inequality and fixed point problems. Optimization (2023). https://doi.org/10.1080/02331934.2023.2168482
Alakoya, T.O., Mewomo, O.T.: Viscosity S-iteration method with inertial technique and self-adaptive step size for split variational inclusion, equilibrium and fixed point problems. Comput. Appl. Math. 41(1) (2022) (paper no. 39)
Alakoya, T.O., Uzor, V.A., Mewomo, O.T.: A new projection and contraction method for solving split monotone variational inclusion, pseudomonotone variational inequality, and common fixed point problems. Comput. Appl. Math. 42(1) (2023) (paper no. 3)
Alakoya, T.O., Uzor, V.A., Mewomo, O.T., Yao, J.-C.: On system of monotone variational inclusion problems with fixed-point constraint. J. Inequal. Appl. 2022 (2022) (art. no. 47)
Alber, Y., Ryazantseva, I.: Nonlinear Ill-Posed Problems of Monotone Type. Springer, London (2006)
Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problem. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
Ceng, L.C., Ansari, Q.H., Yao, J.C.: Some iterative methods for finding fixed points and for solving constrained convex minimization problems. Nonlinear Anal. 74, 5286–5302 (2021)
Censor, Y., Gibali, A., Reich, S.: The subgradient extragradient method for solving variational inequalities in Hilbert space. J. Optim. Theory Appl. 148(2), 318–335 (2011)
Chang, S.S., Kim, J.K., Wang, X.R.: Modified block iterative algorithm for solving convex feasibility problems in Banach spaces. J. Inequal. Appl. 2010, Art. ID 869684 (2010)
Chan, R.H., Ma, S., Jang, J.F.: Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J. Imaging Sci. 8(4), 2239–2267 (2015)
Cholamjiak, P., Shehu, Y.: Inertial forward-backward splitting method in Banach spaces with application to compressed sensing. Appl. Math. 64(4), 409–435 (2019)
Cholamjiak, P., Suantai, S.: Iterative methods for solving equilibrium problems, variational inequalities and fixed points of nonexpansive semigroups. J. Glob. Optim. 57, 1277–1297 (2013)
Cholamjiak, P., Thong, D.V., Cho, Y.J.: A novel inertial projection and contraction method for solving pseudomonotone variational inequality problems. Acta Appl. Math. 169, 217–245 (2020)
Godwin, E.C., Alakoya, T.O., Mewomo, O.T., Yao, J.-C.: Relaxed inertial Tseng extragradient method for variational inequality and fixed point problems. Appl. Anal. (2022). https://doi.org/10.1080/00036811.2022.2107913
Godwin, E.C., Izuchukwu, C., Mewomo, O.T.: Image restoration using a modified relaxed inertial method for generalized split feasibility problems. Math. Methods Appl. Sci. (2022). https://doi.org/10.1002/mma.8849
He, S., Dong, Q., Tian, H.: Relaxed projection and contraction methods for solving Lipschitz continuous monotone variational inequalities. Rev. R. Acad. Cienc. Exactas Fís. Nat. Ser. A Mat. (RACSAM) 113, 2773–2791 (2019)
Hieu, D.V., Anh, P.K., Muu, L.D.: Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput. Optim. Appl. 66, 75–96 (2017)
Hieu, D.V., Cholamjiak, P.: Modified extragradient method with Bregman distance for variational inequalities. Appl. Anal. 101(2), 655–670 (2022)
Homaeipour, S., Razani, A.: Weak and strong convergence theorems for relatively nonexpansive multi-valued mappings in Banach spaces. Fixed Point Theory Appl. 2011 (2011) (art. no. 73)
Hsu, M.H., Takahashi, W., Yao, J.C.: Generalized hybrid mappings in Hilbert spaces and Banach spaces. Taiwan. J. Math. 16(1), 129–149 (2012)
Iiduka, H.: Acceleration method for convex optimization over the fixed point set of a nonexpansive mappings. Math. Prog. Ser. A. 149(1–2), 131–165 (2015)
Iiduka, H., Takahashi, W.: Weak convergence of a projection algorithm for variational inequalities in a Banach space. J. Math. Anal. Appl. 339, 668–679 (2008)
Jolaoso, L.O., Alakoya, T.O., Taiwo, A., Mewomo, O.T.: Inertial extragradient method via viscosity approximation approach for solving equilibrium problem in Hilbert space. Optimization 70(2), 387–412 (2021)
Jolaoso, L.O., Khamsi, M.A., Mewomo, O.T., Okeke, C.C.: On inertial type algorithms with generalized contraction mapping for solving monotone variational inclusion problems. Fixed Point Theory 22(2), 685–711 (2021)
Kamimura, S., Takahashi, W.: Strong convergence of a proximal-type algorithm in a Banach space. SIAM J. Optim. 13(2), 938–945 (2003)
Korpelevich, G.M.: An extragradient method for finding saddle points and other problems. Ekon. Mat. Metody 12, 747–756 (1976)
Kohsaka, F., Takahashi, W.: Existence and approximation of fixed points of firmly nonexpansive-type mappings in Banach spaces. SIAM J. Optim. 19(2), 824–835 (2008)
Liu, Y.: Strong convergence of the Halpern subgradient extragradient method for solving variational inequalities in Banach spaces. J Nonlinear Sci. Appl. 10, 395–409 (2017)
Lorenz, D., Pock, T.: An inertial forward-backward algorithm for monotone inclusions. J. Math. Imaging Vis. 51(2), 311–325 (2015)
Maingé, P.E.: A hybrid extragradient-viscosity method for monotone operators and fixed point problems. SIAM J. Control Optim. 47, 1499–1515 (2008)
Maingé, P.E.: Convergence theorem for inertial KM-type algorithms. J. Comput. Appl. Math. 219, 223–236 (2008)
Matsushita, S.-Y., Takahashi, W.: A strong convergence theorem for relatively nonexpansive mappings in a Banach space. J. Approx. Theory 134, 257–266 (2005)
Nakajo, K.: Strong convergence for gradient projection method and relatively nonexpansive mappings in Banach spaces. Appl. Math. Comput. 271, 251–258 (2015)
Ogwo, G.N., Alakoya, T.O., Mewomo, O.T.: An inertial subgradient extragradient method with Armijo type step size for pseudomonotone variational inequalities with non-Lipschitz operators in Banach spaces. J. Ind. Manag. Optim. (2022). https://doi.org/10.3934/jimo.2022239
Ogwo, G.N., Izuchukwu, C., Mewomo, O.T.: Relaxed inertial methods for solving split variational inequality problems without product space formulation. Acta Math. Sci. Ser. B (Engl. Ed.) 42(5), 1701–1733 (2022)
Okeke, C.C., Izuchukwu, C., Mewomo, O.T.: Strong convergence results for convex minimization and monotone variational inclusion problems in Hilbert space. Rend. Circ. Mat. Palermo (2) 69(2), 675–693 (2020)
Panyanak, B.: Ishikawa iteration processes for multi-valued mappings in Banach spaces. Comput. Math. Appl. 54, 872–877 (2007)
Polyak, B.T.: Some methods of speeding up the convergence of iterative methods. Zh. Vychisl. Mat. Mat. Fiz. 4, 1–17 (1964)
Reich, S., Tuyen, T.M., Sunthrayuth, P., Cholamjiak, P.: Two new inertial algorithms for solving variational inequalities in reflexive Banach spaces. Numer. Funct. Anal. Optim. 42(16), 1954–1984 (2021)
Taiwo, A., Alakoya, T.O., Mewomo, O.T.: Halpern-type iterative process for solving split common fixed point and monotone variational inclusion problem between Banach spaces. Numer. Algorithms 86(4), 1359–1389 (2021)
Taiwo, A., Owolabi, A.O.-E., Jolaoso, L.O., Mewomo, O.T., Gibali, A.: A new approximation scheme for solving various split inverse problems. Afr. Mat. 32(3–4), 369–401 (2021)
Shehu, Y., Cholamjiak, P.: Iterative method with inertial for variational inequalities in Hilbert spaces. Calcolo 56(1) (2019) (paper no. 4)
Tian, M., Jiang, B.: Inertial Haugazeau’s hybrid subgradient extragradient algorithm for variational inequality problems in Banach spaces. Optimization 70(5–6), 987–1007 (2021)
Tian, M., Liu, L.: General iterative methods for equilibrium and constrained convex minimization problem. Optimization 63(9), 1367–1385 (2014)
Uzor, V.A., Alakoya, T.O., Mewomo, O.T.: On split monotone variational inclusion problem with multiple output sets with fixed point constraints. Comput. Methods Appl. Math. (2022). https://doi.org/10.1515/cmam-2022-0199
Uzor, V.A., Alakoya, T.O., Mewomo, O.T.: Strong convergence of a self-adaptive inertial Tseng’s extragradient method for pseudomonotone variational inequalities and fixed point problems. Open Math. 20, 234–257 (2022)
Xu, H.K.: Strong convergence of approximating fixed point sequences for nonexpansive mappings. Bull. Aust. Math. Soc. 74, 143–151 (2006)
Zǎlinescu, C.: On uniformly convex functions. J. Math. Anal. Appl. 95, 344–374 (1983)
Acknowledgements
The authors sincerely thank the editor and anonymous referee for their careful reading, constructive comments and useful suggestions that improved the manuscript. The first author is supported by the National Research Foundation (NRF) of South Africa Incentive Funding for Rated Researchers (Grant Number 119903). Opinions expressed and conclusions arrived are those of the authors and are not necessarily to be attributed to the NRF.
Funding
Open access funding provided by University of KwaZulu-Natal.
Author information
Authors and Affiliations
Corresponding author
Ethics declarations
Conflict of interest
The authors declare that they have no conflict of interest.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
7 Appendix
Appendix 7.1
Algorithm 4.1 in [28].
where E is a 2-uniformly convex and uniformly smooth Banach space with the 2-uniformly convexity constant \(c_1,\) \(S:E\rightarrow E\) is a relatively nonexpansive mapping, \(A:E\rightarrow E^*\) is a monotone and L-Lipschitz mapping with \(L>0,\) \(\{\lambda _n\}\) is a sequence of real numbers satisfying \(0<\inf _{n\ge 1}\lambda _n\le \sup _{n\ge 1}\lambda _n<\frac{c_1}{L},\) \(\{\beta _n\}\subset [a,b]\subset [0,1]\) for some \(a,b\in (0,1),\) and \(\{\alpha _n\}\subset (0,1)\) with \(\lim _{n\rightarrow \infty }\alpha _n=0\) and \(\sum _{n=1}^\infty \alpha _n=\infty .\)
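As a quick sanity check, the parameter conditions above can be verified numerically for a candidate choice of sequences; the values \(L=2,\) \(c_1=1,\) \(\lambda _n\equiv 0.3,\) \(\alpha _n=\frac{1}{n+1}\) and \(\beta _n\equiv 0.5\) below are illustrative assumptions only, not taken from [28]:

```python
# Numerical sanity check of the parameter conditions for Algorithm 4.1 in [28];
# L, c1 and the sequences below are assumed illustrative values.
L, c1 = 2.0, 1.0
N = 10_000

lam = [0.3] * N                                    # 0 < inf lam_n <= sup lam_n < c1/L = 0.5
alpha = [1.0 / (n + 1) for n in range(1, N + 1)]   # alpha_n -> 0 and sum alpha_n diverges
beta = [0.5] * N                                   # beta_n in [a, b] subset of (0, 1)

assert 0 < min(lam) <= max(lam) < c1 / L
assert alpha[-1] < 1e-3          # finite-horizon proxy for alpha_n -> 0
assert sum(alpha) > 8.0          # harmonic-type partial sums grow without bound
assert all(0.0 < x < 1.0 for x in beta)
```

Of course, divergence of \(\sum \alpha _n\) cannot be certified by a finite computation; the partial-sum check is only a heuristic consistent with the harmonic series.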
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Mewomo, O.T., Alakoya, T.O. & Khan, S.H. Inertial hybrid gradient method with adaptive step size for variational inequality and fixed point problems of multivalued mappings in Banach spaces. Afr. Mat. 34, 48 (2023). https://doi.org/10.1007/s13370-023-01087-z
Keywords
- Inertial algorithm
- Hybrid gradient method
- Adaptive step size
- Variational inequality
- Fixed point problem
- Relatively nonexpansive multivalued mappings
- Convex minimization problems