
Convergence analysis of projection method for variational inequalities


Abstract

The main contributions of this paper are the proposal and the convergence analysis of a class of inertial projection-type algorithms for solving variational inequality problems in real Hilbert spaces, where the underlying operator is monotone and uniformly continuous. We carry out a unified analysis of the proposed method under very mild assumptions. In particular, weak convergence of the generated sequence is established, and a nonasymptotic O(1/n) rate of convergence is obtained, where n denotes the iteration counter. We also present some experimental results to illustrate the benefits gained by introducing the inertial extrapolation steps.
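
To give a concrete picture of the class of methods described above, the following is a minimal Python sketch of a generic inertial extragradient-type projection method for a variational inequality VI(F, C). It is not the specific algorithm analyzed in the paper (which, in particular, handles merely uniformly continuous operators through an adaptive step-size rule); the operator, feasible set, and the parameters lam (step size) and theta (inertial factor) below are illustrative assumptions only.

import numpy as np

def inertial_projection_vi(F, proj_C, x0, x1, lam=0.3, theta=0.1, max_iter=2000, tol=1e-9):
    # Generic sketch only: inertial extrapolation followed by extragradient-type
    # projection steps; not the algorithm proposed in this paper.
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    for n in range(max_iter):
        w = x + theta * (x - x_prev)        # inertial extrapolation step
        z = proj_C(w - lam * F(w))          # projection (prediction) step
        x_new = proj_C(w - lam * F(z))      # extragradient (correction) step
        if np.linalg.norm(x_new - x) <= tol:
            return x_new, n
        x_prev, x = x, x_new
    return x, max_iter

# Illustrative monotone affine VI on the nonnegative orthant C = R^2_+.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])     # symmetric part is positive definite, so F is monotone
q = np.array([-1.0, -1.0])
F = lambda x: M @ x + q
proj_C = lambda y: np.maximum(y, 0.0)       # metric projection onto R^2_+
sol, iters = inertial_projection_vi(F, proj_C, np.zeros(2), np.zeros(2))
print(sol, iters)                           # should approach the solution (0.2, 0.6)

The inertial term theta * (x - x_prev) is the extrapolation step whose practical benefit the experiments in the paper illustrate; setting theta = 0 recovers the corresponding non-inertial method.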


References

  • Alvarez F (2004) Weak convergence of a relaxed and inertial hybrid projection-proximal point algorithm for maximal monotone operators in Hilbert space. SIAM J Optim 14:773–782

  • Alvarez F, Attouch H (2001) An inertial proximal method for maximal monotone operators via discretization of a nonlinear oscillator with damping. Set-Valued Anal 9:3–11

  • Attouch H, Goudou X, Redont P (2000) The heavy ball with friction. I. The continuous dynamical system. Commun Contemp Math 2(1):1–34

  • Attouch H, Czarnecki MO (2002) Asymptotic control and stabilization of nonlinear oscillators with non-isolated equilibria. J Differ Equ 179(1):278–310

  • Attouch H, Peypouquet J, Redont P (2014) A dynamical approach to an inertial forward-backward algorithm for convex minimization. SIAM J Optim 24:232–256

  • Attouch H, Peypouquet J (2016) The rate of convergence of Nesterov’s accelerated forward-backward method is actually faster than \(\frac{1}{k^2}\). SIAM J Optim 26:1824–1834

  • Aubin J-P, Ekeland I (1984) Applied nonlinear analysis. Wiley, New York

  • Baiocchi C, Capelo A (1984) Variational and quasivariational inequalities: applications to free boundary problems. Wiley, New York

  • Bauschke HH, Combettes PL (2011) Convex analysis and monotone operator theory in Hilbert spaces. CMS Books in Mathematics. Springer, New York

  • Beck A, Teboulle M (2009) A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J Imaging Sci 2(1):183–202

  • Bot RI, Csetnek ER, Hendrich C (2015) Inertial Douglas–Rachford splitting for monotone inclusion. Appl Math Comput 256:472–487

  • Bot RI, Csetnek ER (2016) An inertial alternating direction method of multipliers. Minimax Theory Appl 1:29–49

  • Bot RI, Csetnek ER (2016) An inertial forward-backward-forward primal-dual splitting algorithm for solving monotone inclusion problems. Numer Algorithms 71:519–540

  • Cegielski A (2012) Iterative methods for fixed point problems in Hilbert spaces, vol 2057. Lecture notes in mathematics. Springer, Berlin

  • Ceng LC, Hadjisavvas N, Wong N-C (2010) Strong convergence theorem by a hybrid extragradient-like approximation method for variational inequalities and fixed point problems. J Glob Optim 46:635–646

  • Ceng LC, Yao JC (2007) An extragradient-like approximation method for variational inequality problems and fixed point problems. Appl Math Comput 190:205–215

  • Censor Y, Gibali A, Reich S (2011) Strong convergence of subgradient extragradient methods for the variational inequality problem in Hilbert space. Optim Methods Softw 26:827–845

  • Censor Y, Gibali A, Reich S (2011) The subgradient extragradient method for solving variational inequalities in Hilbert space. J Optim Theory Appl 148:318–335

  • Chen C, Chan RH, Ma S, Yang J (2015) Inertial proximal ADMM for linearly constrained separable convex optimization. SIAM J Imaging Sci 8:2239–2267

  • Denisov S, Semenov V, Chabak L (2015) Convergence of the modified extragradient method for variational inequalities with non-Lipschitz operators. Cybern Syst Anal 51:757–765

  • Dong QL, Cho YJ, Zhong LL, Rassias ThM (2018) Inertial projection and contraction algorithms for variational inequalities. J Glob Optim 70:687–704

  • Glowinski R, Lions J-L, Trémolières R (1981) Numerical analysis of variational inequalities. North-Holland, Amsterdam

  • Goebel K, Reich S (1984) Uniform convexity, hyperbolic geometry, and nonexpansive mappings. Marcel Dekker, New York

  • Harker PT, Pang J-S (1990) A damped-Newton method for the linear complementarity problem. In: Allgower G, Georg K (eds) Computational solution of nonlinear systems of equations, lectures in Appl. Math., vol 26. AMS, Providence, pp 265–284

  • He YR (2006) A new double projection algorithm for variational inequalities. J Comput Appl Math 185:166–173

  • Hieu DV, Anh PK, Muu LD (2017) Modified hybrid projection methods for finding common solutions to variational inequality problems. Comput Optim Appl 66:75–96

  • Iusem AN, Gárciga Otero R (2001) Inexact versions of proximal point and augmented Lagrangian algorithms in Banach spaces. Numer Funct Anal Optim 22:609–640

  • Iusem AN, Nasri M (2011) Korpelevich’s method for variational inequality problems in Banach spaces. J Glob Optim 50:59–76

  • Khobotov EN (1989) Modification of the extragradient method for solving variational inequalities and certain optimization problems. USSR Comput Math Math Phys 27:120–127

  • Kinderlehrer D, Stampacchia G (1980) An introduction to variational inequalities and their applications. Academic Press, New York

  • Konnov IV (2001) Combined relaxation methods for variational inequalities. Springer, Berlin

  • Korpelevich GM (1976) The extragradient method for finding saddle points and other problems. Ékon Mat Metody 12:747–756

  • Lorenz DA, Pock T (2015) An inertial forward-backward algorithm for monotone inclusions. J Math Imaging Vis 51:311–325

  • Maingé PE (2008) Regularized and inertial algorithms for common fixed points of nonlinear operators. J Math Anal Appl 344:876–887

  • Maingé P-E, Gobinddass ML (2016) Convergence of one-step projected gradient methods for variational inequalities. J Optim Theory Appl 171:146–168

  • Malitsky YuV (2015) Projected reflected gradient methods for monotone variational inequalities. SIAM J Optim 25:502–520

  • Malitsky YuV, Semenov VV (2015) A hybrid method without extrapolation step for solving variational inequality problems. J Glob Optim 61:193–202

  • Marcotte P (1991) Applications of Khobotov’s algorithm to variational and network equilibrium problems. Inf Syst Oper Res 29:258–270

  • Mashreghi J, Nasri M (2010) Forcing strong convergence of Korpelevich’s method in Banach spaces with its applications in game theory. Nonlinear Anal 72:2086–2099

  • Nadezhkina N, Takahashi W (2006) Strong convergence theorem by a hybrid method for nonexpansive mappings and Lipschitz-continuous monotone mappings. SIAM J Optim 16:1230–1241

  • Ochs P, Brox T, Pock T (2015) iPiasco: inertial proximal algorithm for strongly convex optimization. J Math Imaging Vis 53:171–181

  • Polyak BT (1964) Some methods of speeding up the convergence of iterative methods. Zh Vychisl Mat Mat Fiz 4:1–17

  • Solodov MV, Svaiter BF (1999) A new projection method for variational inequality problems. SIAM J Control Optim 37:765–776

  • Takahashi W (2000) Nonlinear functional analysis. Yokohama Publishers, Yokohama

  • Thong DV, Hieu DV (2018) Modified subgradient extragradient method for variational inequality problems. Numer Algorithms 79(2):597–610

  • Tseng P (2000) A modified forward-backward splitting method for maximal monotone mappings. SIAM J Control Optim 38:431–446

Corresponding author

Correspondence to Yekini Shehu.

Additional information

Communicated by Carlos Conca.


Appendix

In this appendix, we present a version of Lemma 4.3 for the case in which F is pseudo-monotone.

Lemma 8.1

Let F be pseudo-monotone, uniformly continuous and sequentially weakly continuous on H. Assume that Assumption 3.2 holds. Furthermore, let \(\{x_{n_k}\}\) be a subsequence of \( \{x_n\}\) converging weakly to a limit point p. Then, \( p \in \text {SOL} \).

Proof

By the definition of \(z_{n_k}\) together with (6), we have

$$\begin{aligned} \langle w_{n_k}-F(w_{n_k})-z_{n_k},x-z_{n_k}\rangle \le 0, \quad \forall \, x\in C, \end{aligned}$$

which implies that

$$\begin{aligned} \langle w_{n_k}-z_{n_k},x-z_{n_k}\rangle \le \langle F(w_{n_k}),x-z_{n_k} \rangle ,\quad \forall \, x \in C. \end{aligned}$$

Hence,

$$\begin{aligned} \langle w_{n_k}-z_{n_k},x-z_{n_k}\rangle +\langle F(w_{n_k}),z_{n_k}-w_{n_k}\rangle \le \langle F(w_{n_k}),x-w_{n_k}\rangle ,\quad \forall \, x\in C.\qquad \end{aligned}$$
(39)
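
For readers without the paper’s Eq. (6) at hand, the first display of this proof is, presumably, the standard variational characterization of the metric projection \(P_C\) applied with \(z_{n_k}=P_C\big (w_{n_k}-F(w_{n_k})\big )\) (any step size being absorbed into F here for readability), namely

$$\begin{aligned} z=P_C(u) \quad \Longleftrightarrow \quad z\in C \ \ \text {and} \ \ \langle u-z,x-z\rangle \le 0, \quad \forall \, x\in C. \end{aligned}$$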

Fix \(x \in C\) and let \(k\rightarrow \infty \) in (39). Since \(\lim _{k \rightarrow \infty } \Vert w_{n_k}-z_{n_k}\Vert =0 \), we have

$$\begin{aligned} 0\le \liminf _{k \rightarrow \infty } \langle F(w_{n_k}),x-w_{n_k}\rangle \end{aligned}$$
(40)

for all \( x \in C \). Now, we choose a sequence \(\{\epsilon _k\}_k\) of positive numbers decreasing and tending to 0. For each \(\epsilon _k\), we denote by \(N_k\) the smallest positive integer such that

$$\begin{aligned} \left\langle F(w_{n_j}), x- w_{n_j} \right\rangle + \epsilon _k \ge 0 \quad \forall j \ge N_k, \end{aligned}$$
(41)

where the existence of \(N_k\) follows from (40). Since \(\left\{ \epsilon _k \right\} \) is decreasing, it is easy to see that the sequence \( \left\{ N_k \right\} \) is increasing. Furthermore, for each k, \(F(w_{N_k}) \not = 0\) and, setting

$$\begin{aligned} v_{N_k}=\frac{F(w_{N_k})}{\Vert F(w_{N_k})\Vert ^2}, \end{aligned}$$

we have \( \left\langle F(w_{N_k}),v_{N_k} \right\rangle =1\) for each k. Now we can deduce from (41) that for each k

$$\begin{aligned} \left\langle F(w_{N_k}),x+\epsilon _k v_{N_k} -w_{N_k} \right\rangle \ge 0, \end{aligned}$$

and, since F is pseudo-monotone, that

$$\begin{aligned} \left\langle F(x+\epsilon _k v_{N_k}),x+\epsilon _k v_{N_k} -w_{N_k} \right\rangle \ge 0. \end{aligned}$$
(42)

On the other hand, we have that \(\left\{ x_{n_k} \right\} \) converges weakly to p as \(k \rightarrow \infty \), and hence so does \(\left\{ w_{n_k} \right\} \). Since F is sequentially weakly continuous on C, \(\left\{ F(w_{n_k}) \right\} \) converges weakly to F(p). We may suppose that \(F(p) \not = 0\) (otherwise, p is already a solution). Since the norm mapping is sequentially weakly lower semicontinuous, we have

$$\begin{aligned} 0<\Vert F(p) \Vert \le \lim \inf _{k \rightarrow \infty } \Vert F(w_{n_k})\Vert . \end{aligned}$$

Since \(\left\{ w_{N_k} \right\} \subset \left\{ w_{n_k} \right\} \) and \(\epsilon _k \rightarrow 0\) as \(k \rightarrow \infty \), we obtain

$$\begin{aligned} 0\le \limsup _{k \rightarrow \infty } \Vert \epsilon _k v_{N_k}\Vert = \limsup _{k \rightarrow \infty } \frac{\epsilon _k}{\Vert F(w_{N_k})\Vert } \le \frac{\limsup _{k \rightarrow \infty }\epsilon _k}{\liminf _{k \rightarrow \infty }\Vert F(w_{n_k})\Vert } \le \frac{0}{\Vert F(p)\Vert }=0, \end{aligned}$$

which implies that \(\lim _{k \rightarrow \infty } \Vert \epsilon _k v_{N_k}\Vert =0\). Hence, taking the limit as \(k \rightarrow \infty \) in (42), we obtain

$$\begin{aligned} \left\langle F(x),x-p \right\rangle \ge 0. \end{aligned}$$

Now, using Lemma 2.2 of Mashreghi and Nasri (2010), we have that \(p\in \text {SOL}\). \(\square \)
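
For completeness, we recall the kind of result invoked in this final step. Lemma 2.2 of Mashreghi and Nasri (2010) is, to our reading, a Minty-type lemma: under continuity and pseudo-monotonicity of F on C,

$$\begin{aligned} p\in C \ \ \text {and} \ \ \langle F(x),x-p\rangle \ge 0 \ \ \forall \, x\in C \quad \Longrightarrow \quad p\in \text {SOL}. \end{aligned}$$

Applied to the last display, with p the weak limit obtained above, this gives the claimed conclusion.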

Cite this article

Shehu, Y., Iyiola, O.S., Li, XH. et al. Convergence analysis of projection method for variational inequalities. Comp. Appl. Math. 38, 161 (2019). https://doi.org/10.1007/s40314-019-0955-9

