Abstract
In this paper, we combine the gradient projection algorithm and the hybrid steepest descent method and prove strong convergence to a common element of the solution set of an equilibrium problem, the null space of an inverse strongly monotone operator, the set of fixed points of a continuous pseudocontractive mapping, and the set of minimizers of a convex function. This common element is shown to be the unique solution of a variational inequality problem.
MSC:47H06, 47H09, 47J05, 47J25.
1 Introduction
Let H be a real Hilbert space with inner product ⟨·,·⟩ and norm ∥·∥, and let K be a nonempty, closed, and convex subset of H. Let F be a bifunction of K × K into ℝ. The equilibrium problem for F is to find x ∈ K such that

F(x, y) ≥ 0 for all y ∈ K. (1.1)
The set of solutions of (1.1) is denoted by EP(F). For a given nonlinear operator A, the problem of finding x* ∈ K such that

⟨Ax*, x − x*⟩ ≥ 0 for all x ∈ K (1.2)

is called the variational inequality problem, and the set of its solutions is denoted by VI(K, A).
Given a mapping T : K → H, let F(x, y) = ⟨Tx, y − x⟩ for all x, y ∈ K; then z ∈ EP(F) if and only if ⟨Tz, y − z⟩ ≥ 0 for all y ∈ K, that is, z is a solution of the variational inequality (1.2).
The mapping T is said to be Lipschitz if there exists L ≥ 0 such that

∥Tx − Ty∥ ≤ L∥x − y∥ for all x, y. (1.3)

The operator T is said to be a contraction if L ∈ [0, 1) in (1.3), and nonexpansive if L = 1. Let H be a real Hilbert space and K a nonempty subset of H. A mapping T : K → H is said to be pseudocontractive if, for all x, y ∈ K,

⟨Tx − Ty, x − y⟩ ≤ ∥x − y∥². (1.4)

Equivalently, (1.4) can be written as

∥Tx − Ty∥² ≤ ∥x − y∥² + ∥(I − T)x − (I − T)y∥² for all x, y ∈ K.

The set of fixed points of a mapping T is denoted by Fix(T).
In what follows, we shall use → for strong convergence and ⇀ for weak convergence.
For every point x ∈ H, there exists a unique nearest point in K, denoted by P_K x, such that ∥x − P_K x∥ ≤ ∥x − y∥ for all y ∈ K. The map P_K is called the metric projection of H onto K. It is also well known that P_K satisfies

⟨x − y, P_K x − P_K y⟩ ≥ ∥P_K x − P_K y∥² for all x, y ∈ H. (1.5)

Moreover, P_K x is characterized by the property that

⟨x − P_K x, y − P_K x⟩ ≤ 0 for all y ∈ K.
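In finite dimensions this characterization is easy to check numerically. The sketch below is a toy example only: K is assumed to be the closed unit ball (so the projection has a closed form), and the inequality ⟨x − P_K x, y − P_K x⟩ ≤ 0 is verified for sampled points y ∈ K.

```python
import numpy as np

def project_ball(x, r=1.0):
    """Metric projection P_K onto the closed ball of radius r centered at 0:
    the unique nearest point of K to x."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

x = np.array([3.0, 4.0])       # a point outside the unit ball
px = project_ball(x)           # nearest point in K; here [0.6, 0.8]

# Characterization: <x - P_K x, y - P_K x> <= 0 for every y in K
rng = np.random.default_rng(0)
for _ in range(1000):
    y = project_ball(rng.normal(size=2))   # an arbitrary point of K
    assert np.dot(x - px, y - px) <= 1e-12
```

Geometrically, x − P_K x is an outward normal to K at P_K x, which is exactly what the characterization states.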
Consider the optimization problem:

min_{x ∈ K} f(x), (1.6)

where f : K → ℝ is a real valued convex functional. If f is a continuously Fréchet differentiable convex functional on K, then x* ∈ K is a solution of the optimization problem (1.6) if and only if the optimality condition

⟨∇f(x*), x − x*⟩ ≥ 0 for all x ∈ K (1.7)
holds.
Using the characterization of the projection operator, one can easily show that solving the variational inequality (1.7) is equivalent to solving the fixed point problem of finding x* ∈ K which satisfies the relation

x* = P_K(x* − λ∇f(x*)), (1.8)

where λ > 0 is a constant. A formulation of the iterative scheme for the variational inequality problem (1.7) may be as follows: for arbitrary x_0 ∈ K, define {x_n} by

x_{n+1} = P_K(x_n − λ∇f(x_n)), n ≥ 0,
or more generally

x_{n+1} = Tx_n − λ_{n+1}μF(Tx_n), n ≥ 0, (1.9)

where T is a nonexpansive mapping, F is a monotone operator, and the parameters μ, λ_n are positive real numbers known as step-sizes. The scheme (1.9) has been considered with several step-size rules:
- Constant step-size, where for some λ > 0, we have λ_n = λ for all n.
- Diminishing step-size, where λ_n → 0 and Σ_{n=0}^∞ λ_n = ∞.
- Polyak's step-size, where λ_n = (f(x_n) − f*)/∥∇f(x_n)∥², where f* is the optimal value of (1.6).
- Modified Polyak's step-size, where λ_n = (f(x_n) − f_n)/∥∇f(x_n)∥² and f_n = min_{0≤j≤n} f(x_j) − δ for some scalar δ > 0.
The constant step-size rule is suitable when we are interested in finding an approximate solution to the problem (1.6). The diminishing step-size rule is an off-line rule and is typically used with or for some distributed implementations of the method.
These schemes are the well-known gradient projection algorithms. However, their convergence requires that the operator ∇f be Lipschitz continuous and strongly monotone, which is a strong and restrictive condition in applications. If ∇f is Lipschitz continuous and strongly monotone on H, then for suitable λ the map P_K(I − λ∇f) is a strict contraction and, by the Banach contraction principle, the sequence defined by (1.8) converges strongly to the unique minimizer of (1.6), which is the solution of the variational inequality problem (1.7). Another limitation of the scheme in (1.8) is that it assumes that a closed form expression of P_K is known, whereas in many situations it is not.
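As a concrete illustration of the gradient projection algorithm with a constant step-size, one can take f(x) = ½∥x − b∥² (so ∇f(x) = x − b, which is 1-Lipschitz and 1-strongly monotone) minimized over the closed unit ball; the minimizer is the projection of b onto the ball. The objective, the constraint set, and the step-size below are assumptions chosen purely to keep the example in closed form.

```python
import numpy as np

def project_ball(x, r=1.0):
    """Metric projection onto the closed ball of radius r centered at 0."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

def gradient_projection(grad_f, project, x0, lam=0.5, n_iter=200):
    """Constant step-size gradient projection: x_{n+1} = P_K(x_n - lam * grad_f(x_n))."""
    x = x0
    for _ in range(n_iter):
        x = project(x - lam * grad_f(x))
    return x

b = np.array([3.0, 4.0])
grad_f = lambda x: x - b                  # gradient of f(x) = 0.5 * ||x - b||^2
x_star = gradient_projection(grad_f, project_ball, np.zeros(2))
print(x_star)                             # converges to [0.6, 0.8] = P_K(b)
```

Since ∇f here is 1-Lipschitz and 1-strongly monotone, the map x ↦ P_K(x − λ∇f(x)) is a strict contraction for λ ∈ (0, 2), matching the Banach contraction argument above.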
The iterative approximation of fixed points and zeros of nonlinear operators has been studied extensively by many authors to solve nonlinear operator equations as well as variational inequality problems (see [1, 2] and the references therein).
Ceng et al. [3] studied the following algorithm:
where and , , and they proved that the sequence converges strongly to a minimizer of a constrained convex minimization problem which also solves a certain variational inequality.
For T_r and F_r as in Lemma 2.2 and Lemma 2.3, respectively, Ofoedu [4] introduced the following iteration scheme:
and proved that if H is a real Hilbert space; is a continuous pseudocontractive mapping; , , is a countably infinite family of nonexpansive mappings; is a bifunction satisfying (A1)-(A4); a proper lower semicontinuous convex function; a continuous monotone mapping; is a fixed vector; is a strongly positive bounded linear operator with coefficient γ; is an η-inverse strongly monotone mapping; and the sequences , , satisfy appropriate conditions, then the sequence converges strongly to a unique solution of the variational inequality , .
In 2001, Yamada [5] introduced the hybrid steepest descent method, which solves the variational inequality over the set K of fixed points of a nonexpansive map T. In particular, he studied the scheme (1.9),

x_{n+1} = Tx_n − λ_{n+1}μF(Tx_n), n ≥ 0,
and proved the following theorem.
Theorem IY [5]
Assume that H is a real Hilbert space, T : H → H is nonexpansive with Fix(T) ≠ ∅, and F : H → H is η-strongly monotone and L-Lipschitz. Let μ ∈ (0, 2η/L²). Assume also that the sequence {λ_n} satisfies the following conditions:
(i) λ_n → 0 as n → ∞,
(ii) Σ_{n=1}^∞ λ_n = ∞,
(iii) Σ_{n=1}^∞ |λ_n − λ_{n+1}| < ∞ or lim_{n→∞} λ_n/λ_{n+1} = 1.
Take x_0 ∈ H arbitrary and define {x_n} by (1.9); then {x_n} converges strongly to the unique solution of VI(K, F), where K is the set of fixed points of T.
The scheme (1.9) minimizes certain convex functions over the intersection of fixed point sets of nonexpansive mappings if F = ∇f, say, where f is a continuously Fréchet differentiable convex function. The scheme solves the variational inequality and does not require a closed form expression of P_K; instead, it requires a closed form expression of a nonexpansive mapping T whose set of fixed points is K.
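A minimal numerical sketch of the hybrid steepest descent step in Theorem IY may help. All concrete choices here (T, F, c, μ, λ_n) are assumptions made purely for illustration: T is taken to be the projection onto the closed unit ball K (a nonexpansive map with Fix(T) = K), and F(x) = x − c with c outside K, so F is 1-strongly monotone and 1-Lipschitz and the unique solution of the variational inequality over K is P_K(c).

```python
import numpy as np

def T(x, r=1.0):
    """Nonexpansive map: projection onto the closed unit ball; Fix(T) = the ball."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

c = np.array([3.0, 4.0])
F = lambda x: x - c            # eta = 1 strongly monotone, L = 1 Lipschitz

mu = 1.0                       # mu in (0, 2*eta/L^2)
x = np.array([10.0, -7.0])     # arbitrary starting point
for n in range(1, 50001):
    lam = 1.0 / n              # lam_n -> 0, sum lam_n = inf, sum |lam_n - lam_{n+1}| < inf
    y = T(x)
    x = y - lam * mu * F(y)    # hybrid steepest descent step

print(x)                       # approaches [0.6, 0.8], the solution over Fix(T)
```

The step-size λ_n = 1/n satisfies conditions (i)-(iii) of Theorem IY. Replacing T by any other nonexpansive map with a closed form (for instance, a composition of simple projections) handles constraint sets whose metric projection has no closed form, which is the advantage of the method noted above.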
Motivated by the work of Yamada [5], Tian [6] introduced the following scheme:
and he proved that if , , satisfy certain conditions, then the sequence given by (1.10) converges strongly to , which solves the variational inequality , .
In 2012, Tian and Liu [7] introduced the following scheme:
where , , and proved that if C is a nonempty, closed, and convex subset of a real Hilbert space H; Φ is a bifunction from C × C into ℝ satisfying (A1)-(A4); f is a real valued convex function; ∇f is an L-Lipschitzian mapping with ; Ω is the solution set of a minimization problem; F is a k-Lipschitzian and η-strongly monotone operator with constants ; ; and the sequences , , satisfy appropriate conditions, then the sequence generated by converges strongly to a point which solves the variational inequality , .
In this paper, motivated by the results of Ofoedu [4], Yamada [5], Tian [6], and Tian and Liu [7], we study a new iterative scheme and prove its strong convergence to a common element of the solution set of an equilibrium problem, the null space of an inverse strongly monotone operator, the set of fixed points of a continuous pseudocontractive mapping, and the set of minimizers of a convex function. This common element is shown to be the unique solution of a variational inequality problem.
2 Preliminaries
For solving the equilibrium problem for a bifunction F : K × K → ℝ, let us assume that F satisfies the following conditions:
(A1) F(x, x) = 0 for all x ∈ K.
(A2) F is monotone, i.e., F(x, y) + F(y, x) ≤ 0 for all x, y ∈ K.
(A3) For each x, y, z ∈ K, lim sup_{t↓0} F(tz + (1 − t)x, y) ≤ F(x, y).
(A4) For each x ∈ K, the function y ↦ F(x, y) is convex and lower semicontinuous.
Lemma 2.1 (Blum and Oettli [8])
Let K be a nonempty, closed, and convex subset of H and F a bifunction of K × K into ℝ satisfying (A1)-(A4). For r > 0 and x ∈ H, there exists z ∈ K such that

F(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0 for all y ∈ K.
Lemma 2.2 (Zegeye [9])
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let T : C → C be a continuous pseudocontractive mapping. Then, for r > 0 and x ∈ H, there exists z ∈ C such that

⟨y − z, Tz⟩ − (1/r)⟨y − z, (1 + r)z − x⟩ ≤ 0 for all y ∈ C.

Furthermore, if

T_r x := {z ∈ C : ⟨y − z, Tz⟩ − (1/r)⟨y − z, (1 + r)z − x⟩ ≤ 0 for all y ∈ C}, x ∈ H,

then the following hold:
(C1) T_r is single valued;
(C2) T_r is firmly nonexpansive, i.e., for any x, y ∈ H, ∥T_r x − T_r y∥² ≤ ⟨T_r x − T_r y, x − y⟩;
(C3) Fix(T_r) = Fix(T);
(C4) Fix(T) is closed and convex.
Lemma 2.3 (Combettes and Hirstoaga [10])
Assume that F : K × K → ℝ satisfies (A1)-(A4). For r > 0 and x ∈ H, define F_r : H → K by

F_r x := {z ∈ K : F(z, y) + (1/r)⟨y − z, z − x⟩ ≥ 0 for all y ∈ K};

then the following hold:
(B1) F_r is single valued;
(B2) F_r is firmly nonexpansive, i.e., for any x, y ∈ H, ∥F_r x − F_r y∥² ≤ ⟨F_r x − F_r y, x − y⟩;
(B3) Fix(F_r) = EP(F);
(B4) EP(F) is closed and convex.
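For a concrete bifunction the resolvent F_r of Lemma 2.3 can be written in closed form and properties (B2) and (B3) checked numerically. The choices below are assumptions made purely for illustration: K is the closed unit ball and F(x, y) = ⟨x − b, y − x⟩, in which case the defining inequality of F_r x reduces, via the projection characterization, to F_r x = P_K((x + rb)/(1 + r)), and the unique equilibrium point is P_K(b).

```python
import numpy as np

def proj_ball(x, r=1.0):
    """Metric projection onto the closed ball of radius r centered at 0."""
    nx = np.linalg.norm(x)
    return x if nx <= r else (r / nx) * x

b = np.array([2.0, 0.0])       # parameter of the bifunction F(x, y) = <x - b, y - x>

def F_r(x, r):
    """Resolvent of Lemma 2.3 for F(x, y) = <x - b, y - x> over the unit ball:
    z = F_r(x) satisfies F(z, y) + (1/r)<y - z, z - x> >= 0 for all y in K,
    which rearranges to <y - z, z - (x + r*b)/(1 + r)> >= 0 for all y in K,
    i.e. z = P_K((x + r*b)/(1 + r))."""
    return proj_ball((x + r * b) / (1 + r))

r = 2.0
rng = np.random.default_rng(0)
# (B2): firm nonexpansiveness  ||F_r x - F_r y||^2 <= <F_r x - F_r y, x - y>
for _ in range(500):
    x, y = rng.normal(size=2), rng.normal(size=2)
    u, v = F_r(x, r), F_r(y, r)
    assert np.dot(u - v, u - v) <= np.dot(u - v, x - y) + 1e-10

# (B3): iterating the resolvent reaches its fixed point, the equilibrium point
z = rng.normal(size=2)
for _ in range(200):
    z = F_r(z, r)
print(z)                        # approaches [1., 0.] = P_K(b)
```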
Lemma 2.4 (Ofoedu [4])
Let C be a nonempty, closed, and convex subset of a real Hilbert space H. Let be a continuous pseudocontractive mapping. For , let be the mapping in Lemma 2.2, then for any and for any ,
Recall that a mapping A : K → H is said to be monotone if ⟨Ax − Ay, x − y⟩ ≥ 0 for all x, y ∈ K. In particular, the mapping A is called
(1) η-strongly monotone over K if there exists η > 0 such that ⟨Ax − Ay, x − y⟩ ≥ η∥x − y∥² for all x, y ∈ K;
(2) α-inverse strongly monotone over K if there exists α > 0 such that ⟨Ax − Ay, x − y⟩ ≥ α∥Ax − Ay∥² for all x, y ∈ K.
Lemma 2.5 Let A : K → H be monotone over a closed and convex subset K of H, then the following statements are equivalent:
(1) x* ∈ K is a solution of VI(K, A), i.e., ⟨Ax*, x − x*⟩ ≥ 0 for all x ∈ K.
(2) For fixed λ > 0, x* = P_K(x* − λAx*).
Lemma 2.6 (Xu [11])
Let {a_n} be a sequence of nonnegative real numbers satisfying the following relation:

a_{n+1} ≤ (1 − α_n)a_n + α_n σ_n, n ≥ 0,

where
(i) {α_n} ⊂ [0, 1], Σ_{n=0}^∞ α_n = ∞,
(ii) lim sup_{n→∞} σ_n ≤ 0 or Σ_{n=0}^∞ |α_n σ_n| < ∞.
Then lim_{n→∞} a_n = 0.
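A quick numerical sketch shows the behavior the recursion above guarantees; the parameters here are chosen arbitrarily for illustration. With α_n = 1/(n + 1) (so Σα_n = ∞) and σ_n → 0, the sequence is driven to 0 even from a large starting value.

```python
# Recursion a_{n+1} = (1 - alpha_n) * a_n + alpha_n * sigma_n
a = 5.0                       # a_0 >= 0, arbitrary
for n in range(1, 200001):
    alpha = 1.0 / (n + 1)     # alpha_n in [0, 1], sum alpha_n = infinity
    sigma = 1.0 / n           # sigma_n -> 0, so limsup sigma_n <= 0
    a = (1 - alpha) * a + alpha * sigma

print(a)                      # a_n -> 0, as the lemma predicts
```

The decay is slow (roughly log n / n for these choices), which is why the lemma asks only for convergence, not a rate.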
Lemma 2.7 Let H be a real Hilbert space, then for all x, y ∈ H, the following hold:
(i) ∥x + y∥² = ∥x∥² + 2⟨x, y⟩ + ∥y∥²;
(ii) ∥x + y∥² ≤ ∥x∥² + 2⟨y, x + y⟩.
Lemma 2.8 (Demiclosedness principle [12])
Let T : C → H be a nonexpansive mapping with Fix(T) ≠ ∅. If {x_n} is a sequence in C such that x_n ⇀ x and (I − T)x_n → 0, then x = Tx.
Definition 2.9 A map T : H → H is called averaged if there exist a nonexpansive mapping S on H and α ∈ (0, 1) such that

T = (1 − α)I + αS,

and we say that T is α-averaged.
Remark 2.10
(i) Firmly nonexpansive maps are ½-averaged. Thus, a map T is firmly nonexpansive if and only if T = ½(I + S), where S is nonexpansive and I is the identity mapping on H.
(ii) Every averaged mapping is nonexpansive.
(iii) A map S is nonexpansive if and only if I − S is ½-inverse strongly monotone.
(iv) If A is η-inverse strongly monotone and λ > 0, then λA is (η/λ)-inverse strongly monotone.
Lemma 2.11 A map T : H → H is averaged if and only if I − T is η-inverse strongly monotone for some η > ½. In particular, for α ∈ (0, 1), T is α-averaged if and only if I − T is 1/(2α)-inverse strongly monotone.
Lemma 2.12 Let T = (1 − α)A + αS, α ∈ (0, 1). If A is averaged and S is nonexpansive, then T is averaged.
Remark 2.13
(i) A map N is firmly nonexpansive if and only if it is 1-inverse strongly monotone.
(ii) N is firmly nonexpansive if and only if I − N is firmly nonexpansive.
(iii) Every firmly nonexpansive map is averaged.
(iv) If T = (1 − α)N + αS, α ∈ (0, 1), where N is firmly nonexpansive and S is nonexpansive, then T is averaged.
(v) If T_i, i = 1, 2, …, n, is a family of nonexpansive mappings, then the composition T_1 T_2 ⋯ T_n is nonexpansive.
(vi) If T_i, i = 1, 2, …, n, is a family of averaged mappings, then the composition T_1 T_2 ⋯ T_n is averaged. If T_1 is α_1-averaged and T_2 is α_2-averaged for some α_1, α_2 ∈ (0, 1), then T_1 T_2 is α-averaged with α = α_1 + α_2 − α_1 α_2.
Let A : K → H be α-inverse strongly monotone, i.e.,

⟨Ax − Ay, x − y⟩ ≥ α∥Ax − Ay∥² for all x, y ∈ K. (2.1)

When α = 1, (2.1) implies that A is firmly nonexpansive and hence A is nonexpansive. Thus, a map A is firmly nonexpansive if and only if it is 1-inverse strongly monotone. From the Schwarz inequality, we find that α-inverse strong monotonicity implies (1/α)-Lipschitz continuity. However, the converse is not true. For instance, −I (I is the identity mapping on H) is nonexpansive (hence 1-Lipschitz) but not firmly nonexpansive, hence not 1-inverse strongly monotone. In 1977, Baillon and Haddad [13] showed that if A is the gradient of a convex function, say f, i.e., A = ∇f, then (1/α)-Lipschitz continuity implies α-inverse strong monotonicity and vice versa.
If ∇f is L-Lipschitz, then ∇f is (1/L)-inverse strongly monotone and λ∇f is (1/(λL))-inverse strongly monotone. Then, by Lemma 2.11, I − λ∇f is (λL/2)-averaged for λ ∈ (0, 2/L). The projection map P_K is firmly nonexpansive and hence is ½-averaged. The composition P_K(I − λ∇f) is α-averaged (from Remark 2.13) with α = λL/2 + ½ − (λL/2)(½) = (2 + λL)/4 ∈ (0, 1). Now, for λ ∈ (0, 2/L), P_K(I − λ∇f) is (2 + λL)/4-averaged, so that from Remark 2.13, we have P_K(I − λ∇f) = (1 − α)I + αS, where S is nonexpansive and α = (2 + λL)/4 (see [14–22], and the references therein).
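The averaged decomposition of I − λ∇f is easy to verify numerically for a quadratic objective. In the sketch below the matrix Q is a random example assumed for illustration: ∇f(x) = Qx with Lipschitz constant L = λ_max(Q), the step is λ = 1/L, and solving G = (1 − α)I + αS for S with α = λL/2 = ½ must produce a nonexpansive S.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(3, 3))
Q = M.T @ M                          # grad f(x) = Q x for f(x) = 0.5 * x^T Q x
L = np.linalg.eigvalsh(Q)[-1]        # Lipschitz constant of grad f (largest eigenvalue)

lam = 1.0 / L                        # step-size lam in (0, 2/L)
alpha = lam * L / 2                  # predicted averagedness constant = 1/2
G = np.eye(3) - lam * Q              # G = I - lam * grad f
S = (G - (1 - alpha) * np.eye(3)) / alpha    # solve G = (1 - alpha) I + alpha S

# S is nonexpansive (spectral norm <= 1), so G is alpha-averaged as claimed
assert np.linalg.norm(S, 2) <= 1 + 1e-9
assert np.linalg.norm(G, 2) <= 1 + 1e-9
```

Taking λ closer to 2/L pushes ∥S∥ toward 1, matching the endpoint of the admissible range.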
3 Main result
Remark 3.1 In what follows, let K be a nonempty, closed, and convex subset of a real Hilbert space H. Let be a bifunction satisfying (A1)-(A4) and let be a continuous pseudocontractive mapping. Let be a real valued convex function and assume that ∇f is -inverse strongly monotone mapping with . Let be a k-Lipschitz continuous and η-strongly monotone mapping with constants and , . Let Θ denote the solution set of the minimization problem in (1.6). Let B be a γ-inverse strongly monotone mapping. Assume that . Let , , satisfy the following conditions:
(i) , , , ,
(ii) , , ,
(iii) , , ,
and let ε be a real constant such that . For , , are as in Lemma 2.2 and Lemma 2.3.
Consider the sequence generated iteratively from an arbitrary initial point by
we shall study the strong convergence of the iteration scheme (3.1) to a unique solution which solves the variational inequality , , and we have , where is nonexpansive.
Lemma 3.2 Suppose the conditions of Remark 3.1 are satisfied, then defined by (3.1) is bounded.
Proof We first show that is nonexpansive. For and , we have
which implies that
and hence is nonexpansive.
Let . Let ; ; , then , , and we have
For all , define by where A is a k-Lipschitzian and η-strongly monotone mapping on H. Assume that , for , we have
From (3.5), we have
where . Hence, is a strict contraction and by the Banach contraction principle, it has a unique fixed point in H.
Now, for and from (3.1) and (3.6), we have
By induction, we get
Therefore is bounded. Consequently we find that , , , are bounded. □
Lemma 3.3 Suppose that the conditions of Remark 3.1 are satisfied, and is as defined by (3.1), then
Proof For any , we have
which shows that is bounded.
Similarly, we have
Hence, is bounded.
Noting that and from , we get and we compute as follows:
where
Now, from Lemma 2.4, (3.6), and (3.7), we have
where , . Therefore,
where .
Since, , and , we have
Substitute in (3.10) and in (3.11) to get
Adding (3.12) and (3.13) and using (A2), we have
Without loss of generality, let us assume that there exists a real number c such that for all . We now have
which implies that
where .
From (3.9) and (3.14), we have
Using conditions on , , , and Lemma 2.6, we get
Consequently, from (3.14) and (3.15), we have
□
Lemma 3.4 Suppose that the conditions of Remark 3.1 are satisfied, and is as defined by (3.1), then
Proof Observe that
Furthermore, for and using (B2) we have
which implies that
From (3.1) and (3.17), we have the following estimate:
Hence
Since and as , we have
Similarly, using (C2), we have
hence,
From (3.18), we have
Since , , we have
Furthermore
hence,
From (3.18), we obtain
On re-arranging, we have
Since , as , we have
Since , we use the sandwich theorem in (3.22) to obtain
Using (3.19), (3.20), (3.23), (3.24), we obtain
Furthermore, for ,
Now,
which implies that
From (3.18), we obtain
Using (3.27) in (3.26), we obtain
Using the fact that , , , as , we deduce that
From (3.25) and (3.28), we have
□
Lemma 3.5 Suppose that the conditions of Remark 3.1 are satisfied, and is as defined by (3.1). Let be the unique solution of the variational inequality , . Then
where .
Proof To show this inequality, we choose a subsequence of such that
correspondingly, there exists a subsequence of . Since is bounded, there exist a subsequence of and such that . Without loss of generality, we may assume that . Since and K is closed and convex, K is weakly closed. So, we have . Let us show that . First, we show that . Since , we have
It follows from (A2) that
hence,
Since, , as , it follows that , . For and , let . Since and , we have so that . We have from (A1) and (A4)
That is, . It follows from (A3) that , . Since m is arbitrary, it follows that .
We show that . Recall that so that
Put , and . Consequently, we get . From (3.30) and the pseudocontractivity of T, we have
Since as , (3.31) becomes
taking the limit as and using the fact that T is continuous, (3.32) becomes
Put and we have
which implies that .
We now show that . Observe that for ,
by (3.23). Let as . If and , by the nonexpansive property of , and Lemma 2.8, , where ; hence, .
Next we show that , the null space of B. We make the following estimate:
which implies that
From (3.15) and the condition on , we obtain
Using (3.15), (3.25) in (3.33), we have
Replace n by in (3.34) to get
Since the map is nonexpansive from (3.3), we deduce from the demiclosedness principle that
which implies that or (); hence, we get and conclude that .
Since , it follows that
□
Theorem 3.6 Suppose that the conditions of Remark 3.1 are satisfied, and is as defined by (3.1), then converges strongly to , which is a unique solution of the variational inequality , .
Proof Let , then
where and .
Apply Lemma 2.6 to (3.36) to conclude that . □
Remark 3.7 The prototype sequences are
Remark 3.8 Our result extends the result of Tian and Liu [7] and is more widely applicable.
Remark 3.9 The scheme is more widely applicable than those of Yamada [5] and Tian [6], who worked with a single nonexpansive mapping.
References
Xu HK: An iterative approach to quadratic optimization. J. Optim. Theory Appl. 2003, 116(3):659–678. 10.1023/A:1023073621589
Ishikawa S: Fixed point by a new iterative method. Proc. Am. Math. Soc. 1974, 44: 147–150. 10.1090/S0002-9939-1974-0336469-5
Ceng LC, Ansari QH, Yao JC: Some iterative methods for finding fixed points and for solving constrained minimization problems. Nonlinear Anal. 2011, 74: 5286–5302. 10.1016/j.na.2011.05.005
Ofoedu E: A general approximation scheme for solutions of various problems in fixed point theory. Int. J. Anal. 2013., 2013: Article ID 762831
Yamada I: The hybrid steepest descent method for variational inequality problems over the intersection of fixed point sets of nonexpansive mappings. Inherently Parallel Algorithms in Feasibility and Optimization and Their Application (Haifa 2000) 2001.
Tian M: An application of hybrid steepest descent methods for equilibrium problems and strictly pseudocontractions in Hilbert spaces. J. Inequal. Appl. 2011., 2011: Article ID 173430
Tian M, Liu L: Iterative algorithm based on the viscosity approximation method for equilibrium and constrained convex minimization problem. Fixed Point Theory Appl. 2012., 2012: Article ID 201
Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63(1–4):123–145.
Zegeye H: An iterative approximation method for a common fixed point of two pseudocontractive mappings. ISRN Math. Anal. 2011., 2011: Article ID 621901
Combettes PL, Hirstoaga SA: Equilibrium programming in Hilbert spaces. J. Nonlinear Convex Anal. 2005, 6(1):117–136.
Xu HK: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002, 66(1):240–256. 10.1112/S0024610702003332
Goebel K, Kirk WA Cambridge Studies in Advanced Mathematics 38. In Topics on Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.
Baillon JB, Haddad G: Quelques propriétés des opérateurs angle-bornés et n-cycliquement monotones. Isr. J. Math. 1977, 26: 137–150. 10.1007/BF03007664
Chantarangsi W, Jaiboon C, Kumam P: A viscosity hybrid steepest descent method for generalized mixed equilibrium problems and variational inequalities for relaxed cocoercive mapping in Hilbert spaces. Abstr. Appl. Anal. 2010., 2010: Article ID 390972
Jaiboon C, Kumam P, Humphries UW: Weak convergence theorem by an extragradient method for variational inequality, equilibrium and fixed point problems. Bull. Malays. Math. Soc. 2009, 32(2):173–185.
Jaiboon C, Chantarangsi W, Kumam P: A convergence theorem based on a hybrid relaxed extragradient method for generalized equilibrium problems and fixed point problems of a finite family of nonexpansive mappings. Nonlinear Anal. Hybrid Syst. 2010, 4(1):199–215. 10.1016/j.nahs.2009.09.009
Jaiboon C, Kumam P, Humphries U: An extragradient method for relaxed cocoercive variational inequality and equilibrium problems. Anal. Theory Appl. 2009, 25(4):381–400. 10.1007/s10496-009-0381-8
Kumam W, Kumam P: Hybrid iterative scheme by relaxed extragradient method for solutions of equilibrium problems and a general system of variational inequalities with application to optimization. Nonlinear Anal. Hybrid Syst. 2009, 3: 640–656. 10.1016/j.nahs.2009.05.007
Kumam P: Strong convergence theorems by an extragradient methods for solving variational inequality and equilibrium problems in a Hilbert space. Turk. J. Math. 2011, 74: 5286–5302.
Onjai-uea N, Jaiboon C, Kumam P: A relaxed hybrid steepest descent method for common solutions of generalized mixed equilibrium problems and fixed point problems. Fixed Point Theory Appl. 2011., 2011: Article ID 32 10.1186/1687-1812-2011-32
Onjai-uea N, Jaiboon C, Kumam P, Humphries U: Convergence of iterative sequences for fixed points of an infinite family of nonexpansive mappings based on a hybrid steepest descent methods. J. Inequal. Appl. 2012., 2012: Article ID 101 10.1186/1029-242X-2012-101
Wirojana N, Jitpeera T, Kumam P: The hybrid steepest descent method for solving variational inequality over triple hierarchical problems. J. Inequal. Appl. 2012., 2012: Article ID 280 10.1186/1029-242X-2012-280
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally to the writing of this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
Osilike, M.O., Ofoedu, E.U. & Attah, F.U. The hybrid steepest descent method for solutions of equilibrium problems and other problems in fixed point theory. Fixed Point Theory Appl 2014, 156 (2014). https://doi.org/10.1186/1687-1812-2014-156