Abstract
In this paper, we introduce and study a triple hierarchical variational inequality (THVI) with constraints of minimization and equilibrium problems. More precisely, let $\operatorname{Fix}(T)$ be the fixed point set of a nonexpansive mapping $T$, let $\operatorname{MEP}(\Theta,\varphi)$ be the solution set of a mixed equilibrium problem (MEP), and let Γ be the solution set of a minimization problem (MP) for a convex and continuously Fréchet differentiable functional in a Hilbert space. We want to find a solution of a variational inequality with a variational inequality constraint over the intersection of $\operatorname{Fix}(T)$, $\operatorname{MEP}(\Theta,\varphi)$, and Γ. We propose a hybrid iterative algorithm with regularization to compute approximate solutions of the THVI, and we present the convergence analysis of the proposed iterative algorithm.
MSC:49J40, 47J20, 47H10, 65K05, 47H09.
1 Introduction
Let H be a real Hilbert space with inner product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$ over the real scalar field ℝ. Let C be a nonempty closed convex subset of H, and let $P_C$ be the metric projection of H onto C. Let $T : C \to C$ be a self-mapping on C. Denote by $\operatorname{Fix}(T)$ the set of fixed points of T. We say that T is L-Lipschitzian if there exists a constant $L > 0$ such that
$$\|Tx - Ty\| \le L\|x - y\|, \quad \forall x, y \in C.$$
When $L = 1$ or $L \in [0, 1)$, we call T a nonexpansive or a contractive mapping, respectively. We say that a mapping $A : C \to H$ is α-inverse strongly monotone if there exists a constant $\alpha > 0$ such that
$$\langle Ax - Ay, x - y\rangle \ge \alpha\|Ax - Ay\|^2, \quad \forall x, y \in C,$$
and that A is η-strongly monotone (resp. monotone) if there exists a constant $\eta > 0$ (resp. $\eta = 0$) such that
$$\langle Ax - Ay, x - y\rangle \ge \eta\|x - y\|^2, \quad \forall x, y \in C.$$
It is known that T is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-inverse strongly monotone. Moreover, if a convex functional has an L-Lipschitz continuous gradient, then that gradient is $\frac{1}{L}$-inverse strongly monotone (see, e.g., [1]).
Let $f : C \to \mathbb{R}$ be a convex and continuously Fréchet differentiable functional. Consider the minimization problem (MP):
$$\min_{x \in C} f(x) \tag{1.1}$$
(assuming the existence of minimizers). We denote by Γ the set of minimizers of problem (1.1). The gradient-projection algorithm (GPA) generates a sequence $\{x_n\}$ determined by the gradient ∇f and the metric projection $P_C$:
$$x_{n+1} := P_C\bigl(x_n - \lambda \nabla f(x_n)\bigr), \quad n \ge 0. \tag{1.2}$$
The convergence of algorithm (1.2) depends on the behavior of ∇f. It is known that if ∇f is η-strongly monotone and L-Lipschitz continuous, then, for $0 < \lambda < \frac{2\eta}{L^2}$, the operator
$$P_C(I - \lambda \nabla f)$$
is a contraction. Hence, the sequence $\{x_n\}$ defined by the GPA (1.2) converges in norm to the unique solution of (1.1). If the gradient ∇f is only assumed to be Lipschitz continuous, then $\{x_n\}$ can only be weakly convergent when H is infinite-dimensional (a counterexample to the norm convergence of $\{x_n\}$ is given by Xu [[2], Section 5]).
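As a hypothetical finite-dimensional illustration, the following sketch runs the GPA (1.2) for $f(x) = \frac{1}{2}\|x - b\|^2$ over the box $C = [0,1]^2$; the gradient is 1-strongly monotone and 1-Lipschitz, so any step size in $(0, 2)$ yields a contraction. The data $b$ and the box are assumptions chosen only for the demonstration.

```python
import numpy as np

# Hypothetical instance of the gradient-projection algorithm (1.2):
# f(x) = 0.5*||x - b||^2 on C = [0,1]^2, so grad f(x) = x - b is
# 1-strongly monotone and 1-Lipschitz; any step 0 < lam < 2 contracts.
b = np.array([2.0, -1.0])

def grad_f(x):
    return x - b

def proj_C(x):                       # metric projection onto the box [0,1]^2
    return np.clip(x, 0.0, 1.0)

lam = 1.0                            # step size in (0, 2*eta/L^2) = (0, 2)
x = np.array([0.3, 0.7])
for _ in range(100):
    x = proj_C(x - lam * grad_f(x))  # x_{n+1} = P_C(x_n - lam*grad f(x_n))

# The unique minimizer of f over C is P_C(b) = (1, 0).
print(x)
```

Here the iteration collapses in one step because $\lambda = 1$ makes $x_n - \lambda\nabla f(x_n) = b$; smaller steps converge geometrically instead.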
Regularization, in particular the traditional Tikhonov regularization, is commonly used to solve ill-posed optimization problems. Consider the regularized minimization problem
$$\min_{x \in C} f_\alpha(x) := f(x) + \frac{\alpha}{2}\|x\|^2,$$
where $\alpha > 0$ is the regularization parameter, and again f is convex with L-Lipschitz continuous gradient ∇f. While a regularization method provides possible strong convergence to the minimum-norm solution, its disadvantage is its implicitness. Hence explicit iterative methods seem more attractive. See, e.g., Xu [2, 3].
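The selection of the minimum-norm minimizer can be seen on a small hypothetical example: $f(x) = \frac{1}{2}(x_1 + x_2 - 1)^2$ has a whole line of minimizers $\{x_1 + x_2 = 1\}$, and the Tikhonov-regularized minimizers drift to the minimum-norm point $(0.5, 0.5)$ as $\alpha \downarrow 0$. The matrix $M$ and vector $v$ below encode $\nabla f_\alpha(x) = (M + \alpha I)x - v$.

```python
import numpy as np

# Hypothetical illustration: f(x) = 0.5*(x1 + x2 - 1)^2 has the line
# {x1 + x2 = 1} as minimizer set; the minimum-norm one is (0.5, 0.5).
# The regularized problem min f(x) + (alpha/2)*||x||^2 has the unique
# minimizer solving (M + alpha*I) x = v with M = [[1,1],[1,1]], v = (1,1).
M = np.array([[1.0, 1.0], [1.0, 1.0]])
v = np.array([1.0, 1.0])

for alpha in (1.0, 1e-2, 1e-4, 1e-6):
    x_alpha = np.linalg.solve(M + alpha * np.eye(2), v)
    print(alpha, x_alpha)   # drifts toward the minimum-norm solution (0.5, 0.5)
```

Closed form: $x_\alpha = \frac{1}{2+\alpha}(1,1)$, so $x_\alpha \to (0.5, 0.5)$ strongly as $\alpha \to 0$.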
On the other hand, for a given mapping $A : C \to H$, we consider the variational inequality problem (VIP) of finding $x^* \in C$ such that
$$\langle Ax^*, x - x^*\rangle \ge 0, \quad \forall x \in C. \tag{1.3}$$
The solution set of VIP (1.3) is denoted by $\operatorname{VI}(C, A)$. It is well known that, when A is monotone and continuous, $x^* \in \operatorname{VI}(C, A)$ if and only if $\langle Ax, x - x^*\rangle \ge 0$ for all $x \in C$ (Minty's lemma).
Variational inequality theory has been studied quite extensively and has emerged as an important tool in several branches of pure and applied sciences; see, e.g., [1, 4–8] and the references therein.
When C is the fixed point set $\operatorname{Fix}(T)$ of a nonexpansive mapping T and $A = I - S$ for a nonexpansive mapping S, VIP (1.3) becomes the variational inequality problem of finding $x^* \in \operatorname{Fix}(T)$ such that
$$\langle (I - S)x^*, x - x^*\rangle \ge 0, \quad \forall x \in \operatorname{Fix}(T). \tag{1.4}$$
This problem, introduced by Moudafi and Maingé [9, 10], is called the hierarchical fixed point problem. It is clear that if S has fixed points in $\operatorname{Fix}(T)$, then they are solutions of VIP (1.4). If S is contractive, the solution set of VIP (1.4) is a singleton, and the problem is well known as a viscosity problem. This was previously introduced by Moudafi [11] and further developed by Xu [12]. In this case, solving VIP (1.4) is equivalent to finding a fixed point of the nonexpansive mapping $P_{\operatorname{Fix}(T)}S$, where $P_{\operatorname{Fix}(T)}$ is the metric projection onto the closed and convex set $\operatorname{Fix}(T)$. Yao et al. [8] introduced a two-step algorithm to solve VIP (1.4).
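When S is a contraction, the composition of the projection onto the fixed point set with S is itself a contraction, so Banach iteration finds the unique solution of the hierarchical problem (1.4). The following sketch uses a hypothetical instance: the fixed point set is realized as the box $[0,1]^2$ and S is an explicit $0.5$-contraction.

```python
import numpy as np

# Hypothetical instance of the hierarchical fixed point problem (1.4):
# Fix(T) = [0,1]^2 (realized by T = P_{[0,1]^2}), and the 0.5-contraction
# S(x) = 0.5*x + (1.5, -0.5).  Solving (1.4) amounts to finding the fixed
# point of the contraction P_{Fix(T)} o S.
c = np.array([1.5, -0.5])

def S(x):
    return 0.5 * x + c

def proj_fix_T(x):                  # projection onto Fix(T) = [0,1]^2
    return np.clip(x, 0.0, 1.0)

x = np.zeros(2)
for _ in range(60):
    x = proj_fix_T(S(x))            # Banach iteration for P_{Fix(T)} o S

print(x)                            # the unique solution, here (1, 0)
```

Since $P_{\operatorname{Fix}(T)} \circ S$ inherits the contraction constant $0.5$, the iterates converge geometrically regardless of the starting point.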
Let $\Theta : C \times C \to \mathbb{R}$ be a bifunction and $\varphi : C \to \mathbb{R}$ be a function. Consider the mixed equilibrium problem (MEP) of finding $x^* \in C$ such that
$$\Theta(x^*, y) + \varphi(y) - \varphi(x^*) \ge 0, \quad \forall y \in C, \tag{1.5}$$
which was studied by Ceng and Yao [13]. The solution set of MEP (1.5) is denoted by $\operatorname{MEP}(\Theta, \varphi)$. The MEP (1.5) is very general in the sense that it includes, as special cases, fixed point problems, optimization problems, variational inequality problems, minimax problems, Nash equilibrium problems in noncooperative games, and others; see, e.g., [13–15].
Recently, Iiduka [16, 17] considered a variational inequality with a variational inequality constraint over the set of fixed points of a nonexpansive mapping. Since this problem has a triple structure, in contrast with bilevel programming problems, hierarchical constrained optimization problems, and hierarchical fixed point problems, it is referred to as a triple hierarchical constrained optimization problem (THCOP). He presented some examples of the THCOP and developed iterative algorithms to find its solution. Since the original problem is a variational inequality, in this paper we call it a triple hierarchical variational inequality (THVI). Ceng et al. introduced and considered some THVIs in [18]. A nice survey article on THVIs is [19]. See also [20–22].
Extending the work in [18], we introduce and study in this paper the following triple hierarchical variational inequality with constraints of minimization and equilibrium problems.
The problem to study
Let C be a nonempty closed convex subset of a real Hilbert space H. Let $f : C \to \mathbb{R}$ be convex and continuously Fréchet differentiable, with Γ being the set of its minimizers. Let $T, S : C \to C$ both be nonexpansive. Let $V : C \to H$ be ρ-contractive with $\rho \in [0, 1)$, and let $F : C \to H$ be κ-Lipschitzian and η-strongly monotone with constants $\kappa, \eta > 0$. Suppose $0 < \mu < \frac{2\eta}{\kappa^2}$ and $0 \le \gamma \le \tau$, where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)}$.
Let Ξ denote the solution set of the following hierarchical variational inequality (HVI): find $z^* \in \operatorname{Fix}(T) \cap \operatorname{MEP}(\Theta, \varphi) \cap \Gamma$ such that
$$\langle (I - S)z^*, z - z^*\rangle \ge 0, \quad \forall z \in \operatorname{Fix}(T) \cap \operatorname{MEP}(\Theta, \varphi) \cap \Gamma,$$
where the solution set Ξ is assumed to be nonempty. Consider the following triple hierarchical variational inequality (THVI).
Find $x^* \in \Xi$ such that
$$\langle (\mu F - \gamma V)x^*, x - x^*\rangle \ge 0, \quad \forall x \in \Xi. \tag{1.6}$$
Based on the iterative schemes provided by Xu [2] and the two-step iterative scheme provided by Yao et al. [8], and by virtue of the viscosity approximation method, the hybrid steepest-descent method, and the regularization method, we propose the following hybrid iterative algorithm with regularization:
It is shown that, under appropriate assumptions, the two iterative sequences and converge strongly to the unique solution of the THVI (1.6).
2 Preliminaries
Let K be a nonempty closed convex subset of a real Hilbert space H. We write $x_n \rightharpoonup x$ and $x_n \to x$ to indicate that the sequence $\{x_n\}$ converges weakly and strongly to x, respectively. The weak ω-limit set of the sequence $\{x_n\}$ is denoted by
$$\omega_w(x_n) := \{x \in H : x_{n_i} \rightharpoonup x \text{ for some subsequence } \{x_{n_i}\} \text{ of } \{x_n\}\}.$$
The metric (or nearest point) projection from H onto K is the mapping $P_K : H \to K$ which assigns to each point $x \in H$ the unique point $P_K x \in K$ satisfying the property
$$\|x - P_K x\| = \inf_{y \in K} \|x - y\|.$$
Proposition 2.1 For given $x \in H$ and $z \in K$:

(i) $z = P_K x$ if and only if $\langle x - z, y - z\rangle \le 0$, $\forall y \in K$;

(ii) $z = P_K x$ if and only if $\|x - z\|^2 \le \|x - y\|^2 - \|y - z\|^2$, $\forall y \in K$;

(iii) $\langle P_K x - P_K y, x - y\rangle \ge \|P_K x - P_K y\|^2$, $\forall x, y \in H$.

Hence, $P_K$ is nonexpansive and monotone.
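The variational characterization of the projection, $z = P_K x$ iff $\langle x - z, y - z\rangle \le 0$ for all $y \in K$, together with nonexpansivity, can be checked numerically. The sketch below uses the closed unit ball in $\mathbb{R}^3$, where the projection has the explicit form $P_K(x) = x / \max(1, \|x\|)$; the random sampling is an assumption of the demonstration.

```python
import numpy as np

# Numerical sanity check of the variational characterization of the metric
# projection: z = P_K(x) iff <x - z, y - z> <= 0 for all y in K.
# K is the closed unit ball, so P_K(x) = x / max(1, ||x||).
rng = np.random.default_rng(0)

def proj_ball(x):
    return x / max(1.0, np.linalg.norm(x))

for _ in range(1000):
    x = rng.normal(size=3) * 3.0
    z = proj_ball(x)
    y = proj_ball(rng.normal(size=3) * 3.0)   # an arbitrary point of K
    assert np.inner(x - z, y - z) <= 1e-12    # characterization (i)

# Nonexpansivity: ||P_K u - P_K v|| <= ||u - v||.
u, v = rng.normal(size=3), rng.normal(size=3)
assert np.linalg.norm(proj_ball(u) - proj_ball(v)) <= np.linalg.norm(u - v) + 1e-12
print("projection checks passed")
```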
Definition 2.2 A mapping $T : H \to H$ is said to be firmly nonexpansive if $2T - I$ is nonexpansive, or equivalently,
$$\langle x - y, Tx - Ty\rangle \ge \|Tx - Ty\|^2, \quad \forall x, y \in H.$$
Alternatively, T is firmly nonexpansive if and only if T can be expressed as
$$T = \frac{1}{2}(I + S),$$
where $S : H \to H$ is nonexpansive. Projections are firmly nonexpansive. We call T an averaged mapping if T can be expressed as a proper convex combination of the identity map I and a nonexpansive mapping; if $T = (1 - \alpha)I + \alpha S$ with $\alpha \in (0, 1)$ and S nonexpansive, we say T is α-averaged. In particular, firmly nonexpansive mappings are $\frac{1}{2}$-averaged.
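The defining inequality of firm nonexpansivity, $\langle x - y, Tx - Ty\rangle \ge \|Tx - Ty\|^2$, holds in particular for projections; the following sketch verifies it on random samples for the projection onto the unit ball (a hypothetical test case).

```python
import numpy as np

# Sketch: projections are firmly nonexpansive, i.e.
# <Tx - Ty, x - y> >= ||Tx - Ty||^2 with T = P_K.
# Tested numerically for K the closed unit ball in R^3.
rng = np.random.default_rng(1)

def proj_ball(x):
    return x / max(1.0, np.linalg.norm(x))

for _ in range(1000):
    x, y = rng.normal(size=3) * 2.0, rng.normal(size=3) * 2.0
    Tx, Ty = proj_ball(x), proj_ball(y)
    assert np.inner(Tx - Ty, x - y) >= np.linalg.norm(Tx - Ty)**2 - 1e-12
print("firm nonexpansiveness verified on random samples")
```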
Proposition 2.3 (see [23])

Let $T : H \to H$ be a given mapping.

(i) T is nonexpansive if and only if the complement $I - T$ is $\frac{1}{2}$-inverse strongly monotone.

(ii) If T is ν-inverse strongly monotone, then, for $\gamma > 0$, γT is $\frac{\nu}{\gamma}$-inverse strongly monotone.

(iii) T is averaged if and only if the complement $I - T$ is ν-inverse strongly monotone for some $\nu > \frac{1}{2}$. Indeed, for $\alpha \in (0, 1)$, T is α-averaged if and only if $I - T$ is $\frac{1}{2\alpha}$-inverse strongly monotone.
Proposition 2.4 (see [23, 24])

Let $S, T, V : H \to H$ be given operators.

(i) If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$, and if S is averaged and V is nonexpansive, then T is averaged.

(ii) T is firmly nonexpansive if and only if the complement $I - T$ is firmly nonexpansive.

(iii) If $T = (1 - \alpha)S + \alpha V$ for some $\alpha \in (0, 1)$, and if S is firmly nonexpansive and V is nonexpansive, then T is averaged.

(iv) The composition of finitely many averaged mappings is averaged. In particular, if $T_1$ is $\alpha_1$-averaged and $T_2$ is $\alpha_2$-averaged, where $\alpha_1, \alpha_2 \in (0, 1)$, then the composition $T_1 T_2$ is $(\alpha_1 + \alpha_2 - \alpha_1\alpha_2)$-averaged.

(v) If the mappings $\{T_i\}_{i=1}^N$ are averaged and have a common fixed point, then
$$\bigcap_{i=1}^N \operatorname{Fix}(T_i) = \operatorname{Fix}(T_1 \cdots T_N).$$
For solving the equilibrium problem for a bifunction $\Theta : C \times C \to \mathbb{R}$, let us consider the following conditions:

(A1) $\Theta(x, x) = 0$ for all $x \in C$;

(A2) Θ is monotone, that is, $\Theta(x, y) + \Theta(y, x) \le 0$ for all $x, y \in C$;

(A3) for each $x, y, z \in C$, $\limsup_{t \to 0^+} \Theta(tz + (1 - t)x, y) \le \Theta(x, y)$;

(A4) for each $x \in C$, $y \mapsto \Theta(x, y)$ is convex and lower semicontinuous;

(A5) for each $y \in C$, $x \mapsto \Theta(x, y)$ is weakly upper semicontinuous;

(B1) for each $x \in H$ and $r > 0$, there exist a bounded subset $D_x \subseteq C$ and $y_x \in C$ such that, for any $z \in C \setminus D_x$,
$$\Theta(z, y_x) + \varphi(y_x) - \varphi(z) + \frac{1}{r}\langle y_x - z, z - x\rangle < 0;$$

(B2) C is a bounded set.
Lemma 2.5 (see [14])

Let C be a nonempty closed convex subset of a real Hilbert space H and let $\Theta : C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)-(A4). Let $r > 0$ and $x \in H$. Then there exists $z \in C$ such that
$$\Theta(z, y) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0, \quad \forall y \in C.$$
Lemma 2.6 (see [15])

Let C be a nonempty closed convex subset of a real Hilbert space H. Let $\Theta : C \times C \to \mathbb{R}$ be a bifunction satisfying (A1)-(A5) and let $\varphi : C \to \mathbb{R}$ be a proper lower semicontinuous and convex function. For $r > 0$ and $x \in H$, define a mapping $T_r^{(\Theta,\varphi)} : H \to C$ as follows:
$$T_r^{(\Theta,\varphi)}(x) = \Bigl\{ z \in C : \Theta(z, y) + \varphi(y) - \varphi(z) + \frac{1}{r}\langle y - z, z - x\rangle \ge 0,\ \forall y \in C \Bigr\}$$
for all $x \in H$. Assume that either (B1) or (B2) holds. Then $T_r^{(\Theta,\varphi)}$ is a single-valued firmly nonexpansive map on H, and $\operatorname{Fix}(T_r^{(\Theta,\varphi)}) = \operatorname{MEP}(\Theta, \varphi)$ is closed and convex.
Lemma 2.7 (see [25])

Let $\{a_n\}$ be a sequence of nonnegative real numbers such that
$$a_{n+1} \le (1 - s_n)a_n + s_n t_n + \varepsilon_n, \quad n \ge 0.$$
Here, $\{s_n\} \subset [0, 1]$, $\{t_n\} \subset \mathbb{R}$, and $\{\varepsilon_n\} \subset [0, \infty)$ for all $n \ge 0$, such that

(i) $\sum_{n=0}^{\infty} s_n = \infty$;

(ii) either $\limsup_{n \to \infty} t_n \le 0$ or $\sum_{n=0}^{\infty} s_n |t_n| < \infty$;

(iii) $\sum_{n=0}^{\infty} \varepsilon_n < \infty$.

Then $\lim_{n \to \infty} a_n = 0$.
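A quick numerical illustration of this convergence lemma, with hypothetical parameter choices $s_n = t_n = \frac{1}{n+1}$ and $\varepsilon_n = 0$: the conditions hold ($\sum s_n = \infty$, $t_n \to 0$), so the recursion should drive $a_n$ to zero.

```python
# Numerical illustration (hypothetical parameters) of the convergence lemma:
# a_{n+1} = (1 - s_n)*a_n + s_n*t_n with s_n = t_n = 1/(n+1), eps_n = 0.
# Since sum s_n = infinity and t_n -> 0, the lemma predicts a_n -> 0.
a = 5.0
for n in range(1, 200000):
    s_n = 1.0 / (n + 1)
    t_n = 1.0 / (n + 1)
    a = (1 - s_n) * a + s_n * t_n

print(a)   # small: the sequence tends to 0
```

The decay is slow (the homogeneous part shrinks like $1/n$), which is typical of such $\sum s_n = \infty$ schemes.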
Lemma 2.8 (Demiclosedness principle; see [1])

Let C be a nonempty closed convex subset of a real Hilbert space H and let $T : C \to C$ be a nonexpansive mapping with $\operatorname{Fix}(T) \neq \emptyset$. If $\{x_n\}$ is a sequence in C converging weakly to x and if $\{(I - T)x_n\}$ converges strongly to y, then $(I - T)x = y$; in particular, if $y = 0$, then $x \in \operatorname{Fix}(T)$.
Lemma 2.9 (see [12])

Let $T : H \to H$ be a nonexpansive mapping and $V : H \to H$ be a ρ-contraction with $\rho \in [0, 1)$, respectively.

(i) $I - T$ is monotone, i.e.,
$$\langle (I - T)x - (I - T)y, x - y\rangle \ge 0, \quad \forall x, y \in H;$$

(ii) $I - V$ is $(1 - \rho)$-strongly monotone, i.e.,
$$\langle (I - V)x - (I - V)y, x - y\rangle \ge (1 - \rho)\|x - y\|^2, \quad \forall x, y \in H.$$
Lemma 2.10 ([26])

Let H be a real Hilbert space. Then, for all $x, y \in H$ and $\lambda \in [0, 1]$,
$$\|\lambda x + (1 - \lambda)y\|^2 = \lambda\|x\|^2 + (1 - \lambda)\|y\|^2 - \lambda(1 - \lambda)\|x - y\|^2.$$
Lemma 2.11 We have the following inequality in an inner product space X:
$$\|x + y\|^2 \le \|x\|^2 + 2\langle y, x + y\rangle, \quad \forall x, y \in X.$$
Notations Let λ be a number in $(0, 1]$ and let $\mu > 0$. Let $F : H \to H$ be κ-Lipschitzian and η-strongly monotone. Associated with a nonexpansive mapping $T : H \to H$, we define the mapping $T^\lambda : H \to H$ by
$$T^\lambda x := Tx - \lambda\mu F(Tx), \quad \forall x \in H.$$
Lemma 2.12 (see [[27], Lemma 3.1])

The map $T^\lambda$ is a contraction provided $0 < \mu < \frac{2\eta}{\kappa^2}$, that is,
$$\|T^\lambda x - T^\lambda y\| \le (1 - \lambda\tau)\|x - y\|, \quad \forall x, y \in H,$$
where $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)} \in (0, 1]$. In particular, if $T = I$, the identity mapping, then
$$\|(I - \lambda\mu F)x - (I - \lambda\mu F)y\| \le (1 - \lambda\tau)\|x - y\|, \quad \forall x, y \in H.$$
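The contraction bound $\|T^\lambda x - T^\lambda y\| \le (1 - \lambda\tau)\|x - y\|$ with $\tau = 1 - \sqrt{1 - \mu(2\eta - \mu\kappa^2)}$ can be checked numerically. The following sketch uses hypothetical data: $F(x) = Ax$ with $A = \operatorname{diag}(2, 3)$ (so $\kappa = 3$, $\eta = 2$) and $T$ the projection onto the unit ball.

```python
import numpy as np

# Numerical sketch of the contraction lemma with hypothetical data:
# F(x) = A x for A = diag(2, 3) is kappa-Lipschitzian and eta-strongly
# monotone with kappa = 3, eta = 2; T = P_{unit ball} is nonexpansive.
# For 0 < mu < 2*eta/kappa^2, T^lam x = T x - lam*mu*F(T x) is a
# (1 - lam*tau)-contraction, tau = 1 - sqrt(1 - mu*(2*eta - mu*kappa^2)).
A = np.diag([2.0, 3.0])
kappa, eta = 3.0, 2.0
mu, lam = 0.3, 0.5                        # mu < 2*eta/kappa^2 = 4/9
tau = 1 - np.sqrt(1 - mu * (2 * eta - mu * kappa**2))

def T(x):                                 # projection onto the unit ball
    return x / max(1.0, np.linalg.norm(x))

def T_lam(x):
    return T(x) - lam * mu * (A @ T(x))

rng = np.random.default_rng(2)
for _ in range(1000):
    x, y = rng.normal(size=2) * 3, rng.normal(size=2) * 3
    lhs = np.linalg.norm(T_lam(x) - T_lam(y))
    rhs = (1 - lam * tau) * np.linalg.norm(x - y)
    assert lhs <= rhs + 1e-12
print("contraction bound verified; 1 - lam*tau =", 1 - lam * tau)
```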
A set-valued mapping $T : H \to 2^H$ is called monotone if $\langle x - y, f - g\rangle \ge 0$ for all $x, y \in H$, $f \in Tx$, and $g \in Ty$. A monotone set-valued mapping is called maximal if its graph is not properly contained in the graph of any other monotone set-valued mapping. It is known that a monotone set-valued mapping T is maximal if and only if, for $(x, f) \in H \times H$, $\langle x - y, f - g\rangle \ge 0$ for every $(y, g) \in \operatorname{graph}(T)$ implies that $f \in Tx$.
Let $A : C \to H$ be a monotone and Lipschitz continuous mapping and let $N_C(v)$ be the normal cone to C at $v \in C$, namely
$$N_C(v) = \{w \in H : \langle v - u, w\rangle \ge 0,\ \forall u \in C\}.$$
Define
$$Tv = \begin{cases} Av + N_C(v), & v \in C,\\ \emptyset, & v \notin C. \end{cases}$$
Lemma 2.13 (see [28])

Let $A : C \to H$ be a monotone mapping.

(i) T is maximal monotone;

(ii) $T^{-1}0 = \operatorname{VI}(C, A)$.
3 Main results
Let us consider the following three-step iterative scheme with regularization:
Here,
- $V$ is a ρ-contraction;
- $T$ and $S$ are nonexpansive mappings;
- $F$ is a κ-Lipschitzian and η-strongly monotone mapping;
- Θ and φ are real-valued functions as in MEP (1.5);
- ∇f is L-Lipschitz continuous with constant $L > 0$;
- and are sequences in with and ;
- and are sequences in ;
- and , where .
Theorem 3.1 Suppose that satisfies (A1)-(A5) and that (B1) or (B2) holds. Let be the bounded sequence generated from any given by (3.1). Assume that
(H1) , ;
(H2) and ;
(H3) , , and ;
(H4) and .
Then we have the following:
(i) ;

(ii) ;

(iii) if holds in addition, i.e., .
Proof First, let us show that $P_C(I - \lambda\nabla f_\alpha)$ is ξ-averaged for each $\lambda \in (0, \frac{2}{\alpha + L})$, where
$$\xi = \frac{2 + \lambda(\alpha + L)}{4} \in (0, 1).$$
The Lipschitz condition implies that the gradient ∇f is $\frac{1}{L}$-inverse strongly monotone [1], that is,
$$\langle \nabla f(x) - \nabla f(y), x - y\rangle \ge \frac{1}{L}\|\nabla f(x) - \nabla f(y)\|^2.$$
Observe that
$$(\alpha + L)\langle \nabla f_\alpha(x) - \nabla f_\alpha(y), x - y\rangle \ge \|\nabla f_\alpha(x) - \nabla f_\alpha(y)\|^2.$$
Hence, $\nabla f_\alpha = \alpha I + \nabla f$ is $\frac{1}{\alpha + L}$-inverse strongly monotone. Thus, $\lambda\nabla f_\alpha$ is $\frac{1}{\lambda(\alpha + L)}$-inverse strongly monotone by Proposition 2.3(ii). By Proposition 2.3(iii), the complement $I - \lambda\nabla f_\alpha$ is $\frac{\lambda(\alpha + L)}{2}$-averaged. Noting that $P_C$ is $\frac{1}{2}$-averaged and utilizing Proposition 2.4(iv), we know that for each $\lambda \in (0, \frac{2}{\alpha + L})$, the map $P_C(I - \lambda\nabla f_\alpha)$ is ξ-averaged with
$$\xi = \frac{1}{2} + \frac{\lambda(\alpha + L)}{2} - \frac{1}{2}\cdot\frac{\lambda(\alpha + L)}{2} = \frac{2 + \lambda(\alpha + L)}{4} \in (0, 1).$$
In particular, $P_C(I - \lambda\nabla f_\alpha)$ is nonexpansive. Furthermore, we may assume
$$\lambda_n \in \Bigl(0, \frac{2}{\alpha_n + L}\Bigr) \quad \text{for all } n \ge 0.$$
Consequently, for each integer $n \ge 0$, $P_C(I - \lambda_n\nabla f_{\alpha_n})$ is $\xi_n$-averaged with
$$\xi_n = \frac{2 + \lambda_n(\alpha_n + L)}{4} \in (0, 1).$$
This immediately implies that $P_C(I - \lambda_n\nabla f_{\alpha_n})$ is nonexpansive for all $n \ge 0$.
We divide the proof into several steps.
Step 1. .
For simplicity, put . Then and for every . We observe that
Moreover, from (3.1) we have
Thus
Utilizing Lemma 2.12 from (3.2) we deduce that
where . Taking into consideration that and , we have
and
Putting in (3.4) and in (3.5), we obtain
and
Adding the last two inequalities, by (A2) we get
and hence
Since , we may assume, without loss of generality, that there exists a positive number c such that for all . Thus we have
and hence
Here, .
Substituting (3.6) into (3.3) we derive
Here, for some .
On the other hand, from (3.1) we have
Simple calculations show that
Utilizing Lemma 2.12 from (3.2), (3.6), and (3.7) we deduce that
where , for some . Therefore,
where , for some . From (H1), (H2), and (H4), it follows that and
Applying Lemma 2.7 to (3.8), we immediately conclude that
In particular, from (H3) it follows that
Step 2. and .
By the firm nonexpansivity of , if , we have
This immediately yields
Let . We have
Note that
Hence we have
By Lemmas 2.10 and 2.12, we have from (3.9) and (3.10) that
Furthermore, utilizing Lemmas 2.11 and 2.12 we have from (3.9) and (3.10) that
It turns out therefore that
Then it is clear that
Since , , , and , we conclude that
Furthermore,
This yields
Since , , and as , we have
and
Therefore, from the last inequality we have
Step 3. and .
Let . Utilizing Lemmas 2.6 and 2.11 we have from (3.12) that
Hence,
Since , , , and , it follows from that , and hence
Furthermore, from the firm nonexpansiveness of we obtain
Consequently,
Thus, from (3.12) we have
This implies that
Since , , , and , it follows that
This, together with and (due to Step 2), implies that
and thus
Step 4. ; moreover, if in addition, then .
Let . Then there exists a subsequence of such that . Since
we have
Hence from , , , and , we get
Since and , we have . Utilizing Lemma 2.8 we derive .
Let us show that . As a matter of fact, since , for any we have
It follows from (A2) that
Replacing n by , we have
Since and , it follows from (A4) that
Put for all and . We have and
Utilizing (A1), (A4), and (3.13), we have
and hence
Letting in (3.14) and utilizing (A3), we get, for each ,
Hence, .
Let us show that . From and , we know that and . Define
where
Then is maximal monotone and if and only if ; see [28] for more details. Let . Then we have
and hence,
Therefore,
On the other hand, from
we have
and hence
Therefore, from
we have
Hence, we obtain
Since is maximal monotone, we have , and hence, , which leads to . Consequently, . This shows that .
Utilizing Lemmas 2.11 and 2.12, we have for every ,
Suppose now that in addition. It follows from (3.15) that
This, together with and , leads to
Observe that
So, it follows from that
Note also that and
It is clear that
Hence, it follows from that is monotone. Since , by Minty’s lemma [1] we have
that is, . This shows that . □
Theorem 3.2 Assume the conditions of Theorem 3.1 hold. Then we have the following:
-
(i)
and both converge strongly to an element , which is a unique solution of the variational inequality
-
(ii)
and both converge strongly to a unique solution of THVI (1.6) if in addition.
Proof Utilizing Lemmas 2.11 and 2.12 we get from (3.15)
where .
Note that is -Lipschitzian and -strongly monotone, namely
and
Hence there exists a unique solution of the variational inequality problem
Since the sequence is bounded, there exists a subsequence of such that
Also, since H is reflexive and is bounded, without loss of generality we may assume that (due to Theorem 3.1(i)). Taking into consideration that is the unique solution of VIP (3.17), we obtain from (3.18)
Putting , from (3.16) we conclude that
Since , , and as , it follows from (3.19) that , , and
Applying Lemma 2.7 to (3.20), we get
This, together with , implies that
From now on, we suppose that . Then by Theorem 3.1(ii) we know that . Since is -Lipschitzian and -strongly monotone, there exists a unique solution of the variational inequality problem
Since the sequence is bounded, there exists a subsequence of such that
Again, since H is reflexive and is bounded, without loss of generality we may assume that (due to Theorem 3.1(ii)). Taking into account that is the unique solution of VIP (3.21), we deduce from (3.22) that
Putting , from (3.16) we immediately infer that
Repeating the same arguments as above, we can readily see that
which, together with , yields
This completes the proof. □
Remark 3.3 Our iterative algorithm (3.1) is very different from Xu's iterative schemes in [2] and Yao et al.'s iterative scheme in [8]. Here, the two-step iterative scheme in [8] for two nonexpansive mappings and the gradient-projection iterative schemes in [2] for MP (1.1) are extended to develop our three-step iterative scheme (3.1) with regularization for the THVI (1.6). It is worth pointing out that, without assuming the conditions that and that , for some constant , our three-step iterative scheme (3.1) converges strongly to an element , which is a unique solution of the variational inequality
See Theorem 3.2(i).
Remark 3.4 As an example, we consider the following sequences:

(a) , , and where and or , ;

(b) .

They satisfy the hypotheses on the parameter sequences in Theorems 3.1 and 3.2.
Remark 3.5 Our Theorems 3.1 and 3.2 improve, extend, supplement, and develop [[8], Theorems 3.1 and 3.2] and [[2], Theorems 5.2 and 6.1] in the following aspects:
(a) Our THVI (1.6) with the unique solution satisfying
is more general than the problem of finding an element satisfying in [8] and the problem of finding an element in [2].

(b) Our three-step iterative algorithm (3.1) for THVI (1.6) is more flexible, more advantageous, and more subtle than Xu's iterative schemes in [2] and Yao et al.'s two-step iterative scheme in [8] because, e.g., it drops the requirement of , for some , in [[8], Theorem 3.2(v)].

(c) The arguments and techniques in our Theorems 3.1 and 3.2 are very different from those in [[8], Theorems 3.1 and 3.2] and in [[2], Theorems 5.2 and 6.1] because we utilize the properties of resolvent operators and maximal monotone mappings (Lemmas 2.5, 2.6, and 2.13), the convergence criteria for real sequences (Lemma 2.7), and the contractive coefficient estimates for the contractions associated with nonexpansive mappings (Lemma 2.12).

(d) Compared with the proofs of [[2], Theorems 5.2 and 6.1], the proofs of our Theorems 3.1 and 3.2 proceed via the argument showing , (see Step 3 in the proof of Theorem 3.1).
References
Goebel K, Kirk WA Cambridge Studies in Advanced Mathematics 28. In Topics on Metric Fixed Point Theory. Cambridge University Press, Cambridge; 1990.
Xu H-K: Averaged mappings and the gradient-projection algorithm. J. Optim. Theory Appl. 2011, 150: 360–378. 10.1007/s10957-011-9837-z
Xu H-K: Iterative methods for the split feasibility problem in infinite-dimensional Hilbert spaces. Inverse Problems 2010., 26: Article ID 105018
Ceng L-C, Wang C-Y, Yao J-C: Strong convergence theorems by a relaxed extragradient method for a general system of variational inequalities. Math. Methods Oper. Res. 2008, 67: 375–390. 10.1007/s00186-007-0207-4
Ceng L-C, Yao J-C: An extragradient-like approximation method for variational inequality problems and fixed point problems. Appl. Math. Comput. 2007, 190: 205–215. 10.1016/j.amc.2007.01.021
Glowinski R: Numerical Methods for Nonlinear Variational Problems. Springer, New York; 1984.
Kinderlehrer D, Stampacchia G: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York; 1980.
Yao Y, Liou Y-C, Marino G: Two-step iterative algorithms for hierarchical fixed point problems and variational inequality problems. J. Appl. Math. Comput. 2009,31(1–2):433–445. 10.1007/s12190-008-0222-5
Moudafi A, Maingé P-E: Towards viscosity approximations of hierarchical fixed points problems. Fixed Point Theory Appl. 2006., 2006: Article ID 95453
Moudafi A, Maingé P-E: Strong convergence of an iterative method for hierarchical fixed point problems. Pac. J. Optim. 2007,3(3):529–538.
Moudafi A: Viscosity approximation methods for fixed-points problems. J. Math. Anal. Appl. 2000,241(1):46–55. 10.1006/jmaa.1999.6615
Xu H-K: Viscosity approximation methods for nonexpansive mappings. J. Math. Anal. Appl. 2004, 298: 279–291. 10.1016/j.jmaa.2004.04.059
Ceng L-C, Yao J-C: A hybrid iterative scheme for mixed equilibrium problems and fixed point problems. J. Comput. Appl. Math. 2008, 214: 186–201. 10.1016/j.cam.2007.02.022
Blum E, Oettli W: From optimization and variational inequalities to equilibrium problems. Math. Stud. 1994, 63: 123–145.
Peng J-W, Yao J-C: A new hybrid-extragradient method for generalized mixed equilibrium problems, fixed point problems and variational inequality problems. Taiwan. J. Math. 2008, 12: 1401–1432.
Iiduka H: Strong convergence for an iterative method for the triple-hierarchical constrained optimization problem. Nonlinear Anal. 2010, 71: 1292–1297.
Iiduka H: Iterative algorithm for solving triple-hierarchical constrained optimization problem. J. Optim. Theory Appl. 2011, 148: 580–592. 10.1007/s10957-010-9769-z
Ceng L-C, Ansari QH, Yao J-C: Iterative methods for triple hierarchical variational inequalities in Hilbert spaces. J. Optim. Theory Appl. 2011, 151: 489–512. 10.1007/s10957-011-9882-7
Ansari QH, Ceng L-C, Gupta H: Triple hierarchical variational inequalities. In Nonlinear Analysis: Approximation Theory, Optimization and Applications. Edited by: Ansari QH. Birkhäuser, Basel; 2014:231–280.
Ceng L-C, Ansari QH, Wen C-F: Hybrid steepest-descent viscosity method for triple hierarchical variational inequalities. Abstr. Appl. Anal. 2012., 2012: Article ID 907105
Ceng L-C, Ansari QH, Yao J-C: Relaxed hybrid steepest-descent methods with variable parameters for triple-hierarchical variational inequalities. Appl. Anal. 2012,91(10):1793–1810. 10.1080/00036811.2011.614602
Kong Z-R, Ceng L-C, Pang CT, Ansari QH: Multi-step hybrid extragradient method for triple hierarchical variational inequalities. Abstr. Appl. Anal. 2013., 2013: Article ID 718624
Byrne C: A unified treatment of some iterative algorithms in signal processing and image reconstruction. Inverse Problems 2004, 20: 103–120. 10.1088/0266-5611/20/1/006
Combettes PL: Solving monotone inclusions via compositions of nonexpansive averaged operators. Optimization 2004,53(5–6):475–504. 10.1080/02331930412331327157
Xu H-K: Iterative algorithms for nonlinear operators. J. Lond. Math. Soc. 2002,66(2):240–256.
Reinermann J: Über Fixpunkte kontrahierender Abbildungen und schwach konvergente Toeplitz-Verfahren. Arch. Math. (Basel) 1969, 20: 59–64. 10.1007/BF01898992
Xu H-K, Kim T-H: Convergence of hybrid steepest-descent methods for variational inequalities. J. Optim. Theory Appl. 2003, 119: 185–201.
Rockafellar RT: On the maximality of sums of nonlinear monotone operators. Trans. Am. Math. Soc. 1970, 149: 75–88. 10.1090/S0002-9947-1970-0282272-5
Acknowledgements
Lu-Chuan Ceng is partially supported by the National Science Foundation of China (11071169), Innovation Program of Shanghai Municipal Education Commission (09ZZ133) and Leading Academic Discipline Project of Shanghai Normal University (DZL707). Ngai-Ching Wong is partially supported by the Taiwan MOST grant 102-2115-M-110-002-MY2. Jen-Chih Yao is partially supported by the Taiwan MOST grant 102-2111-E-037-004-MY3. Both Ngai-Ching Wong and Jen-Chih Yao are also partially supported by the NSYSU-KMU joint venture 103-P013.
Competing interests
The authors declare that they have no competing interests.
Authors’ contributions
All authors contributed equally and significantly in writing this paper. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made.
The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
To view a copy of this licence, visit https://creativecommons.org/licenses/by/4.0/.
Cite this article

Ceng, L.-C., Wong, N.-C. & Yao, J.-C. Regularized hybrid iterative algorithms for triple hierarchical variational inequalities. J. Inequal. Appl. 2014, 490 (2014). https://doi.org/10.1186/1029-242X-2014-490