
Golden ratio algorithms for variational inequalities

  • Full Length Paper
  • Series A
Mathematical Programming

Abstract

The paper presents a fully adaptive algorithm for monotone variational inequalities. In each iteration the method uses the two previous iterates to approximate the local Lipschitz constant, without running a linesearch. Thus, every iteration of the method requires only one evaluation of the monotone operator F and of the proximal mapping of g. The operator F need not be Lipschitz continuous, which also makes the algorithm interesting for composite minimization. The method exhibits an ergodic O(1 / k) convergence rate and an R-linear rate under an error bound condition. We discuss possible applications of the method to fixed point problems as well as several of its generalizations.
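The fixed-step variant of the method (the golden ratio algorithm, GRAAL) can be sketched as follows. This is a minimal illustration, not the paper's adaptive algorithm: it assumes a known Lipschitz constant L, uses the step λ = φ/(2L) with φ the golden ratio, and applies the iteration to a small strongly monotone affine operator with g = 0, so that the proximal step is the identity. All names and the toy problem are illustrative.

```python
import numpy as np

PHI = (1 + np.sqrt(5)) / 2  # golden ratio

def graal(F, prox, z0, lam, iters=2000):
    """Fixed-step golden ratio algorithm:
       z_bar <- ((phi - 1) * z + z_bar) / phi   (convex combination of iterates)
       z     <- prox(z_bar - lam * F(z))        (forward-backward style step)
    """
    z, z_bar = z0.copy(), z0.copy()
    for _ in range(iters):
        z_bar = ((PHI - 1) * z + z_bar) / PHI
        z = prox(z_bar - lam * F(z))
    return z

# Toy monotone VI: find z with F(z) = 0, where F(z) = (A + 0.1*I) z + q
# and A is skew-symmetric (so F is strongly monotone and Lipschitz).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
M = A + 0.1 * np.eye(2)
q = np.array([1.0, 1.0])
F = lambda z: M @ z + q
L = np.linalg.norm(M, 2)  # Lipschitz constant of F (matrix 2-norm)

z = graal(F, prox=lambda x: x, z0=np.zeros(2), lam=PHI / (2 * L))
z_star = np.linalg.solve(M, -q)  # exact solution, for comparison
```

With g = 0 the VI reduces to the monotone equation F(z) = 0, so the iterate z can be checked directly against the linear-algebra solution z_star.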


Figures 1–4 are available in the published article.


Notes

  1. We assume that \(F(z^1)\ne F(z^0)\), otherwise choose another \(z^0\).

  2. All code can be found at https://gitlab.gwdg.de/malitskyi/graal.git.
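The adaptive step-size rule behind footnote 1 hinges on a local Lipschitz estimate built from the two most recent iterates. The sketch below shows only that estimate and the guard against \(F(z^1) = F(z^0)\); all names are illustrative, and the paper's full update rule additionally caps and smooths the resulting step.

```python
import numpy as np

def local_lipschitz(z_new, z_old, F_new, F_old):
    """Local Lipschitz estimate of F between two iterates:
    ||F(z_k) - F(z_{k-1})|| / ||z_k - z_{k-1}||."""
    dF = np.linalg.norm(F_new - F_old)
    dz = np.linalg.norm(z_new - z_old)
    if dF == 0.0:
        # Degenerate case of footnote 1: F(z^1) = F(z^0).
        # The remedy suggested there is to restart from a different z^0.
        raise ValueError("F(z_new) == F(z_old): restart from another z0")
    return dF / dz

# For the linear operator F(z) = 2 z the estimate recovers L = 2.
est = local_lipschitz(np.array([1.0, 0.0]), np.zeros(2),
                      np.array([2.0, 0.0]), np.zeros(2))
```

An inverse power of this estimate then serves as the step size, which is why no linesearch, and only one evaluation of F per iteration, is needed.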


Acknowledgements

The author would like to thank Anna-Lena Martins, Panayotis Mertikopoulos, Matthew Tam, the associate editor, and the anonymous referees for their useful comments, which have significantly improved the quality of the paper.

Author information

Corresponding author

Correspondence to Yura Malitsky.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This research was supported by the German Research Foundation Grant SFB755-A4.

About this article

Cite this article

Malitsky, Y. Golden ratio algorithms for variational inequalities. Math. Program. 184, 383–410 (2020). https://doi.org/10.1007/s10107-019-01416-w

