
Extragradient-type methods with \(\mathcal{O}(1/k)\) last-iterate convergence rates for co-hypomonotone inclusions

Journal of Global Optimization

Abstract

We develop two “Nesterov’s accelerated” variants of the well-known extragradient method to approximate a solution of a co-hypomonotone inclusion formed by the sum of two operators, where one is Lipschitz continuous and the other is possibly multivalued. The first scheme can be viewed as an accelerated variant of Tseng’s forward-backward-forward splitting (FBFS) method, while the second is a Nesterov’s accelerated variant of the “past” FBFS scheme, which requires only one evaluation of the Lipschitz operator and one resolvent of the multivalued mapping per iteration. Under appropriate conditions on the parameters, we prove that both algorithms achieve \(\mathcal{O}(1/k)\) last-iterate convergence rates on the residual norm, where k is the iteration counter. Our results can be viewed as alternatives to a recent class of Halpern-type methods for root-finding problems. For comparison, we also provide a new convergence analysis of two recent extra-anchored gradient-type methods for solving co-hypomonotone inclusions.
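To make the underlying splitting concrete, the following minimal Python sketch implements the classical (non-accelerated) Tseng forward-backward-forward splitting iteration for an inclusion 0 ∈ F(x) + T(x), on a hypothetical toy instance: F is a monotone, 1-Lipschitz rotation and T is the normal cone of the box [-1, 1]^2, whose resolvent is the projection onto that box. It is only an illustration of the base FBFS scheme that the paper accelerates, not the paper’s algorithms; the operators, step size, and iteration count are assumptions chosen for the example.

import numpy as np

def F(x):
    # Monotone (skew-symmetric), 1-Lipschitz linear operator: a 90-degree rotation.
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return A @ x

def resolvent_T(z):
    # Resolvent of T = normal cone of the box [-1, 1]^2, i.e. the projection onto the box.
    return np.clip(z, -1.0, 1.0)

def tseng_fbfs(x0, lam=0.5, iters=200):
    # Classical Tseng FBFS for 0 in F(x) + T(x); requires lam < 1/L for an L-Lipschitz F.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        Fx = F(x)
        y = resolvent_T(x - lam * Fx)     # forward-backward step: (x - y)/lam - F(x) lies in T(y)
        Fy = F(y)
        res = (x - y) / lam + Fy - Fx     # an element of F(y) + T(y); its norm is the residual
        x = y - lam * (Fy - Fx)           # second forward (correction) step
    return x, np.linalg.norm(res)

x_final, residual = tseng_fbfs([2.0, -3.0])
print(x_final, residual)  # x_final approaches the unique zero of F + T (the origin), residual tends to 0

In this toy problem the residual norm decays geometrically; the point of the accelerated schemes studied in the paper is to guarantee an \(\mathcal{O}(1/k)\) last-iterate rate on this residual under the weaker co-hypomonotonicity assumption rather than monotonicity.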


Data availability

The author confirms that this paper does not contain any data.


Acknowledgements

The author is grateful to the anonymous reviewers for their helpful comments and suggestions. This paper is based upon work partially supported by the National Science Foundation (NSF), Grant No. NSF-RTG DMS-2134107, and by the Office of Naval Research (ONR), Grant Nos. N00014-20-1-2088 and N00014-23-1-2588.

Author information

Corresponding author

Correspondence to Quoc Tran-Dinh.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Tran-Dinh, Q.: Extragradient-type methods with \(\mathcal{O}(1/k)\) last-iterate convergence rates for co-hypomonotone inclusions. J. Glob. Optim. 89, 197–221 (2024). https://doi.org/10.1007/s10898-023-01347-z

