
A Projected Extrapolated Gradient Method with Larger Step Size for Monotone Variational Inequalities

Journal of Optimization Theory and Applications

Abstract

A projected extrapolated gradient method is designed for solving monotone variational inequalities in Hilbert spaces. Requiring only local Lipschitz continuity of the operator, the proposed method improves the value of the extrapolation parameter and admits larger step sizes, which are predicted based on local information about the operator and corrected by bounding the distance between each pair of successive iterates. The correction is carried out only when this distance exceeds a given constant, and its main cost is one projection onto the feasible set. In particular, when the operator is the gradient of a convex function, the correction step is unnecessary. We establish convergence and an ergodic convergence rate under this larger range of parameters. Numerical experiments illustrate the gains in efficiency obtained from the larger step sizes.
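To make the prediction–correction idea above concrete, here is a minimal Python sketch of a generic projected extrapolated gradient loop with a distance-capped correction. It is an illustration only: the extrapolation rule, the local step-size estimate, and the names `F`, `proj_C`, `tau`, and `dist_cap` are hypothetical placeholders and do not reproduce the paper's Algorithm 3.1 or its parameter conditions.

```python
import numpy as np

def peg_sketch(F, proj_C, x0, lam0=1.0, tau=0.5, dist_cap=10.0,
               tol=1e-8, max_iter=1000):
    """Illustrative projected extrapolated gradient loop (not Algorithm 3.1).

    F        : monotone, locally Lipschitz operator (callable)
    proj_C   : projection onto the feasible set C (callable)
    tau      : extrapolation parameter (placeholder choice)
    dist_cap : bound on the distance between successive iterates that
               triggers the corrective projection
    """
    x_prev, x, y_prev, lam = x0.copy(), x0.copy(), x0.copy(), lam0
    for _ in range(max_iter):
        # extrapolated point built from the two most recent iterates
        y = x + tau * (x - x_prev)
        # "predict" the step size from local information: a rough local
        # Lipschitz estimate of F (placeholder for the paper's rule)
        dF = np.linalg.norm(F(y) - F(y_prev))
        if dF > 0:
            lam = min(1.2 * lam, 0.9 * np.linalg.norm(y - y_prev) / dF)
        # projected gradient step
        x_new = proj_C(x - lam * F(y))
        # "correct": if successive iterates are too far apart, shorten the
        # step and project once more onto the feasible set
        step = x_new - x
        dist = np.linalg.norm(step)
        if dist > dist_cap:
            x_new = proj_C(x + (dist_cap / dist) * step)
        if np.linalg.norm(x_new - x) < tol:
            break
        x_prev, x, y_prev = x, x_new, y
    return x
```

For a quick test one could take the feasible set to be a box with `proj_C = lambda z: np.clip(z, -1.0, 1.0)` and `F` an affine monotone operator, e.g. `F = lambda z: A @ z + b` with `A + A.T` positive semidefinite.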


Notes

  1. All codes are available at https://github.com/cxk9369010/PEG.


Acknowledgements

The research of Xiaokai Chang was supported by the Innovation Ability Improvement Project of Gansu (Grant No. 2020A022) and the Hongliu Foundation of First-class Disciplines of Lanzhou University of Technology, China. The research of Jianchao Bai was supported by the National Natural Science Foundation of China (Grant No. 12001430) and the China Postdoctoral Science Foundation (Grant No. 2020M683545).

Author information


Corresponding authors

Correspondence to Xiaokai Chang or Jianchao Bai.

Additional information

Communicated by Regina S. Burachik.


The Details on Remark 3.4


For the case of \(\delta \in (\frac{\sqrt{5}-1}{2},1]\) presented in Remark 3.4, by Fact 2.4 with \(\varepsilon _1>0\), we have

$$\begin{aligned} 2\alpha \Vert y_n-y_{n-1}\Vert \Vert y_n-x_{n+1}\Vert\le & {} \alpha (\frac{1}{\varepsilon _1 }\Vert y_n-y_{n-1}\Vert ^2+\varepsilon _1\Vert x_{n+1}-y_n\Vert ^2). \end{aligned}$$

Meanwhile, for any \(\varepsilon _2>0\) we deduce

$$\begin{aligned} \Vert y_n-y_{n-1}\Vert ^2= & {} \Vert y_n-x_n\Vert ^2+\Vert x_n-y_{n-1}\Vert ^2+2\langle y_n-x_n,x_n-y_{n-1}\rangle \\\le & {} \Vert y_n-x_n\Vert ^2+\Vert x_n-y_{n-1}\Vert ^2+2\Vert y_n-x_n\Vert \Vert x_n-y_{n-1}\Vert \\\le & {} (1+\frac{1}{\varepsilon _2})\Vert y_n-x_n\Vert ^2+(1+\varepsilon _2) \Vert x_n-y_{n-1}\Vert ^2. \end{aligned}$$
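Both displayed estimates are standard weighted Cauchy–Schwarz/Young inequalities. As a quick sanity check (purely illustrative, not part of the paper's argument), the following Python snippet verifies them on random vectors for random positive \(\alpha ,\varepsilon _1,\varepsilon _2\):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10000):
    y_n, y_nm1, x_n, x_np1 = (rng.standard_normal(5) for _ in range(4))
    alpha, e1, e2 = rng.uniform(0.1, 3.0, 3)

    # 2*alpha*||y_n - y_{n-1}|| * ||y_n - x_{n+1}||
    #   <= alpha*(||y_n - y_{n-1}||^2 / e1 + e1*||x_{n+1} - y_n||^2)
    lhs1 = 2 * alpha * np.linalg.norm(y_n - y_nm1) * np.linalg.norm(y_n - x_np1)
    rhs1 = alpha * (np.linalg.norm(y_n - y_nm1) ** 2 / e1
                    + e1 * np.linalg.norm(x_np1 - y_n) ** 2)
    assert lhs1 <= rhs1 + 1e-10

    # ||y_n - y_{n-1}||^2 <= (1 + 1/e2)*||y_n - x_n||^2 + (1 + e2)*||x_n - y_{n-1}||^2
    lhs2 = np.linalg.norm(y_n - y_nm1) ** 2
    rhs2 = ((1 + 1 / e2) * np.linalg.norm(y_n - x_n) ** 2
            + (1 + e2) * np.linalg.norm(x_n - y_nm1) ** 2)
    assert lhs2 <= rhs2 + 1e-10
```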

Hence, Lemmas 3.3 and 3.4 can be improved to the following lemmas.

Lemma A.1

Let \(\{x_n\}\), \(\{y_n\}\) be two sequences generated by Algorithm 3.1 and \({\bar{x}}\in {{\mathcal {S}}}\). Then, for any \(\varepsilon _1,\varepsilon _2>0\), we have

$$\begin{aligned}&\Vert x_{n+1}-{\bar{x}}\Vert ^2+2\lambda _n(1 +\delta ) \varPhi ({\bar{x}}, x_n) \\&\quad \le \Vert x_n-{\bar{x}}\Vert ^2+ 2\lambda _{n-1}(1 +\delta ) \varPhi ({\bar{x}}, x_{n-1})\\&\qquad +\left[ \frac{1}{\varepsilon _1}\left( 1+\frac{1}{\varepsilon _2}\right) \alpha -\frac{\lambda _n}{\delta \lambda _{n-1}} \right] \Vert x_n-y_n\Vert ^2\\&\qquad + \left( \frac{\lambda _n}{\delta \lambda _{n-1}}-1 \right) \Vert x_{n+1}-x_n\Vert ^2 \\&\qquad +\left( \varepsilon _1 \alpha -\frac{\lambda _n}{\delta \lambda _{n-1} }\right) \Vert x_{n+1}-y_n\Vert ^2+ \frac{1+\varepsilon _2}{\varepsilon _1}\alpha \Vert x_n-y_{n-1}\Vert ^2, \end{aligned}$$

where \(\varPhi (\cdot ,\cdot )\) is defined as in (7).

Lemma A.2

Let \(\{x_n\}\), \(\{y_n\}\) be two sequences generated by Algorithm 3.1 and \({\bar{x}}\in {{\mathcal {S}}}\). Then, for any \(\varepsilon _1,\varepsilon _2>0\), we have

$$\begin{aligned} a_{n+1}\le & {} a_n-b_n,~~n\ge 2, \end{aligned}$$
(36)

where

$$\begin{aligned} \left\{ \begin{array}{rcl} a_n&{}=&{}\Vert x_n-{\bar{x}}\Vert ^2+ 2\lambda _{n-1}(1 +\delta ) \varPhi ({\bar{x}}, x_{n-1})+\frac{1+\varepsilon _2}{\varepsilon _1}\alpha \Vert x_n-y_{n-1}\Vert ^2, \\ b_n&{}=&{}\left[ \frac{\lambda _n}{\delta \lambda _{n-1} }-\left( \varepsilon _1+\frac{1+\varepsilon _2}{\varepsilon _1}\right) \alpha \right] \Vert x_{n+1}-y_n\Vert ^2\\ &{}&{}+\left[ \frac{\lambda _n}{\delta \lambda _{n-1}}-\frac{1}{\varepsilon _1}\left( 1+\frac{1}{\varepsilon _2}\right) \alpha \right] \Vert x_n-y_n\Vert ^2 +\left( 1- \frac{\lambda _n}{\delta \lambda _{n-1}} \right) \Vert x_{n+1}-x_n\Vert ^2. \end{array}\right. \end{aligned}$$
(37)

For any \(\delta \in (\frac{\sqrt{5}-1}{2},1]\), we have \(\delta ^2+\delta -1>0\) and define a function \(\kappa (\delta )\) as

$$\begin{aligned} \kappa (\delta ):=\frac{1}{\delta }\max \limits _{\varepsilon _1>0,\varepsilon _2>0}\min \left\{ \frac{\varepsilon _1}{\varepsilon _1^2+\varepsilon _2+1}, ~\frac{(\delta ^2+\delta -1)\varepsilon _1\varepsilon _2}{\delta ^2(1+\varepsilon _2)} \right\} . \end{aligned}$$
(38)

In view of the structure of (38), the maximum defining \(\kappa (\delta )\) is attained when the two terms inside the minimum coincide, that is,

$$\begin{aligned} \frac{\varepsilon _1}{\varepsilon _1^2+\varepsilon _2+1}= ~\frac{(\delta ^2+\delta -1)\varepsilon _1\varepsilon _2}{\delta ^2(1+\varepsilon _2)}, \end{aligned}$$

which together with \(a=\frac{\delta ^2}{\delta ^2+\delta -1}\) and \(\varepsilon _1>0\) yields

$$\begin{aligned} \varepsilon _1=\sqrt{\frac{(1+\varepsilon _2)(a-\varepsilon _2)}{\varepsilon _2}}, \end{aligned}$$

and then

$$\begin{aligned} \kappa (\delta )=\frac{1}{\delta }\max \limits _{\varepsilon _2>0} \frac{\sqrt{a\varepsilon _2+(a-1) \varepsilon _2^2-\varepsilon _2^3}}{a(1+\varepsilon _2)}. \end{aligned}$$
(39)

It follows from the first-order optimality condition of problem (39) that

$$\begin{aligned} \kappa (\delta )=\frac{\sqrt{a+1}}{\delta (a+1+\sqrt{a+1})}, \end{aligned}$$

which is attained at \(\varepsilon _1=\sqrt{a+1}\) and \(\varepsilon _2=\sqrt{a+1}-1\).
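As a numerical cross-check of this closed form (an illustrative computation, not part of the paper), note that at \(\delta =1\) one has \(a=1\) and \(\kappa (1)=\sqrt{2}-1\approx 0.414\). The short Python snippet below compares the formula with a brute-force grid maximization of (38); the grid values should agree with the closed form to a few digits.

```python
import numpy as np

def kappa_closed_form(delta):
    """kappa(delta) = sqrt(a+1) / (delta*(a + 1 + sqrt(a+1))), a = delta^2/(delta^2+delta-1)."""
    a = delta ** 2 / (delta ** 2 + delta - 1.0)
    s = np.sqrt(a + 1.0)
    return s / (delta * (a + 1.0 + s))

def kappa_grid(delta, n=600, hi=6.0):
    """Brute-force maximization of the min in (38) over a grid of eps1, eps2 > 0."""
    eps = np.linspace(1e-3, hi, n)
    e1, e2 = np.meshgrid(eps, eps)
    t1 = e1 / (e1 ** 2 + e2 + 1.0)
    t2 = (delta ** 2 + delta - 1.0) * e1 * e2 / (delta ** 2 * (1.0 + e2))
    return np.max(np.minimum(t1, t2)) / delta

for delta in (0.65, 0.8, 1.0):
    print(delta, kappa_closed_form(delta), kappa_grid(delta))
```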

Consequently, we can obtain the following convergence result for Algorithm 3.1 with \(\delta \in (\frac{\sqrt{5}-1}{2},1]\) and \(\alpha \in (0,\kappa (\delta ))\).

Theorem A.1

Let \(\{x_n\}\) be the sequence generated by Algorithm 3.1 with \(\delta \in (\frac{\sqrt{5}-1}{2},1]\) and \(\alpha \in (0,\kappa (\delta ))\). Then, \(\{x_n\}\) converges weakly to a solution of problem (1).

Proof

First, \(\delta \in (\frac{\sqrt{5}-1}{2},1]\) gives \(\frac{1}{\delta }\ge \frac{\delta ^2 +\delta -1}{\delta ^3}>0\). Since \(\lim \limits _{n\rightarrow \infty }\frac{\lambda _n}{\lambda _{n-1}}=1\), taking the limit and using \(\alpha <\kappa (\delta )\) together with the definition of \(\kappa (\delta )\) in (38), we have

$$\begin{aligned} \left. \begin{array}{r} \lim \limits _{n\rightarrow \infty }\left[ \left( \varepsilon _1+\frac{1+\varepsilon _2}{\varepsilon _1} \right) \alpha -\frac{\lambda _n}{\delta \lambda _{n-1} }\right] =\left( \frac{\varepsilon _1^2+\varepsilon _2+1}{\varepsilon _1}\right) \alpha -\frac{1}{\delta }<0,\\ \lim \limits _{n\rightarrow \infty }\left[ \frac{1}{\varepsilon _1}\left( 1+\frac{1}{\varepsilon _2}\right) \alpha -\frac{\lambda _{n}}{\delta \lambda _{n-1} }\right] =\frac{1}{\varepsilon _1}\left( 1+\frac{1}{\varepsilon _2}\right) \alpha -\frac{1}{\delta }<0,\\ \lim \limits _{n\rightarrow \infty }\left[ \frac{1}{\varepsilon _1} \left( 1+\frac{1}{\varepsilon _2}\right) \alpha -\frac{\lambda _n}{\delta \lambda _{n-1} }+\frac{1}{\delta ^2}\left( \frac{\lambda _{n-1}}{\delta \lambda _{n-2}}-1 \right) \right] =\frac{1}{\varepsilon _1}\left( 1+\frac{1}{\varepsilon _2}\right) \alpha - \frac{\delta ^2 +\delta -1}{\delta ^3} <0, \end{array} \right. \end{aligned}$$

for any \(\delta \in (\frac{\sqrt{5}-1}{2},1]\). Thus, there exists an integer \(N>2\) such that, for any \(n>N\),

$$\begin{aligned} \left. \begin{array}{r} \left( \frac{\varepsilon _1^2+\varepsilon _2+1}{\varepsilon _1}\right) \alpha -\frac{\lambda _n}{\delta \lambda _{n-1} }<0,\\ \frac{1}{\varepsilon _1}\left( 1+\frac{1}{\varepsilon _2}\right) \alpha -\frac{\lambda _{n}}{\delta \lambda _{n-1} }<0,\\ \frac{1}{\varepsilon _1}\left( 1+\frac{1}{\varepsilon _2}\right) \alpha -\frac{\lambda _n}{\delta \lambda _{n-1} }+\frac{1}{\delta ^2}\left( \frac{1}{\delta }-1 \right) <0. \end{array} \right. \end{aligned}$$

Consequently, the result follows by an argument similar to the proof of Theorem 3.1; we omit the details. \(\square \)


Cite this article

Chang, X., Bai, J. A Projected Extrapolated Gradient Method with Larger Step Size for Monotone Variational Inequalities. J Optim Theory Appl 190, 602–627 (2021). https://doi.org/10.1007/s10957-021-01902-2

