
Zeroth-order single-loop algorithms for nonconvex-linear minimax problems

Published in: Journal of Global Optimization

Abstract

Nonconvex minimax problems have attracted significant interest in machine learning and many other fields in recent years. In this paper, we propose a new zeroth-order alternating randomized gradient projection algorithm for smooth nonconvex-linear minimax problems. Its iteration complexity for finding an \(\varepsilon \)-first-order Nash equilibrium is \({\mathcal {O}}\left( \varepsilon ^{-3} \right) \), and the number of function value estimations per iteration is bounded by \({\mathcal {O}}\left( d_{x}\varepsilon ^{-2} \right) \). Furthermore, we propose a zeroth-order alternating randomized proximal gradient algorithm for block-wise nonsmooth nonconvex-linear minimax problems; its iteration complexity is \({\mathcal {O}}\left( K^{\frac{3}{2}} \varepsilon ^{-3} \right) \), and the number of function value estimations per iteration is bounded by \({\mathcal {O}}\left( d_{x}\varepsilon ^{-2} \right) \). Numerical results indicate the efficiency of the proposed algorithms.
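The article presents the algorithms formally; as a rough illustration only, the following Python sketch pairs a Gaussian-smoothing zeroth-order estimator of the x-gradient (built purely from function values) with alternating projected gradient steps on a toy nonconvex-linear objective of the form min_x max_y phi(x) + <y, Ax - b>. Everything here (phi, A, b, the ball constraints, step sizes, sample counts) is an assumed placeholder chosen for the demo and is not the authors' exact method or parameter choice.

```python
import numpy as np

# Illustrative sketch only (not the paper's exact algorithm): a zeroth-order
# alternating gradient projection loop for a toy nonconvex-linear problem
#   min_{x in X} max_{y in Y} f(x, y) = phi(x) + <y, A x - b>,
# where the x-gradient is estimated from function values via Gaussian
# smoothing and the y-step uses the exact (linear-in-y) gradient A x - b.

rng = np.random.default_rng(0)
d_x, d_y = 20, 5
A = rng.standard_normal((d_y, d_x))
b = rng.standard_normal(d_y)

def phi(x):
    # smooth nonconvex term in x (toy choice)
    return np.sum(np.sin(x) * x**2)

def f(x, y):
    return phi(x) + y @ (A @ x - b)

def project_ball(v, radius=1.0):
    # Euclidean projection onto a ball of the given radius
    nrm = np.linalg.norm(v)
    return v if nrm <= radius else v * (radius / nrm)

def zo_grad_x(x, y, mu=1e-4, num_samples=50):
    # randomized (Gaussian-smoothing) estimator of grad_x f using only
    # function evaluations: average of [f(x + mu*u, y) - f(x, y)] / mu * u
    g = np.zeros_like(x)
    fx = f(x, y)
    for _ in range(num_samples):
        u = rng.standard_normal(d_x)
        g += (f(x + mu * u, y) - fx) / mu * u
    return g / num_samples

x = rng.standard_normal(d_x)
y = np.zeros(d_y)
eta_x, eta_y = 1e-2, 1e-1

for t in range(500):
    # alternating single-loop updates: zeroth-order projected descent in x,
    # then projected ascent in y using the exact linear gradient
    x = project_ball(x - eta_x * zo_grad_x(x, y), radius=2.0)
    y = project_ball(y + eta_y * (A @ x - b), radius=1.0)

print("final objective:", f(x, y))
```

In this toy setting the inner problem is linear in y, so its gradient A x - b is available exactly, while the x-step relies only on a batch of function evaluations per iteration; that function-value-only regime is what the complexity bounds quoted in the abstract concern.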



Author information

Correspondence to Zi Xu.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This work is supported by the National Natural Science Foundation of China under Grant 12071279 and by the General Project of the Shanghai Natural Science Foundation (No. 20ZR1420600).


About this article


Cite this article

Shen, J., Wang, Z. & Xu, Z. Zeroth-order single-loop algorithms for nonconvex-linear minimax problems. J Glob Optim 87, 551–580 (2023). https://doi.org/10.1007/s10898-022-01169-5


  • Received:

  • Accepted:

  • Published:

  • Issue Date:

  • DOI: https://doi.org/10.1007/s10898-022-01169-5

Keywords

Mathematics Subject Classification
