Abstract
This paper is devoted to solving a convex stochastic optimization problem in the overparameterization setup for the case where the gradient of the objective function is not available, but its value can be computed. For this class of problems we provide a novel gradient-free algorithm whose construction is based on applying a gradient approximation with \(l_2\) randomization in place of the gradient oracle in the biased Accelerated SGD algorithm, which generalizes the convergence results of the AC-SA algorithm to the case where the gradient oracle returns a noisy (inexact) value of the objective function. We also perform a detailed analysis to find the maximum admissible level of adversarial noise at which the desired accuracy can still be guaranteed. We verify the theoretical convergence results on a model example.
The research was supported by the Russian Science Foundation (project No. 21-71-30005), https://rscf.ru/en/project/21-71-30005/.
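For a concrete picture of the construction sketched in the abstract, the following is a minimal Python/NumPy sketch of the idea: a two-point gradient approximation with \(l_2\) (unit-sphere) randomization is plugged into an AC-SA-style accelerated loop in place of a gradient oracle. All names (l2_gradient_estimate, accelerated_zero_order_sgd, the toy objective f_noisy) and the step-size/weighting choices are our own illustrative assumptions, not the algorithm or parameter schedule analyzed in the paper.

```python
import numpy as np

def l2_gradient_estimate(f, x, gamma, rng):
    """Two-point gradient approximation with l_2 (unit-sphere) randomization:
    g = d * (f(x + gamma*e) - f(x - gamma*e)) / (2*gamma) * e,  e ~ Uniform(S^{d-1})."""
    d = x.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)                          # random direction on the unit sphere
    return d * (f(x + gamma * e) - f(x - gamma * e)) / (2.0 * gamma) * e

def accelerated_zero_order_sgd(f, x0, n_iter=2000, gamma=1e-4, step=1e-2, rng=None):
    """AC-SA-style accelerated loop driven by the zero-order estimate above
    (generic step sizes and weights; an illustrative sketch only)."""
    rng = rng or np.random.default_rng(0)
    x, x_ag = x0.copy(), x0.copy()                  # "proximal" and "aggregated" iterates
    for k in range(1, n_iter + 1):
        alpha = 2.0 / (k + 1)                       # standard accelerated weighting
        x_md = (1 - alpha) * x_ag + alpha * x       # middle point where f is queried
        g = l2_gradient_estimate(f, x_md, gamma, rng)
        x = x - step * g                            # gradient-type step with the surrogate gradient
        x_ag = (1 - alpha) * x_ag + alpha * x       # aggregate (output) sequence
    return x_ag

# Toy check: smooth convex quadratic with a small bounded (adversarial-style) perturbation.
if __name__ == "__main__":
    d = 10
    A = np.diag(np.linspace(0.1, 1.0, d))
    f_noisy = lambda z: 0.5 * z @ A @ z + 1e-8 * np.sign(np.sin(z.sum()))
    x_hat = accelerated_zero_order_sgd(f_noisy, x0=np.ones(d))
    print("final objective value:", 0.5 * x_hat @ A @ x_hat)
```

The smoothing radius gamma illustrates the usual trade-off of such estimators: it must be small enough to keep the approximation bias low, yet not so small that the bounded noise in the function values, amplified by the 1/(2*gamma) factor, dominates the estimate; this is the trade-off behind the maximum admissible noise level discussed in the abstract.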
Notes
- 1. The full version of this article, which includes the Appendix, can be found under the article title on arXiv at the following link: https://arxiv.org/abs/2307.12725.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Lobanov, A., Gasnikov, A. (2023). Accelerated Zero-Order SGD Method for Solving the Black Box Optimization Problem Under “Overparametrization” Condition. In: Olenev, N., Evtushenko, Y., Jaćimović, M., Khachay, M., Malkova, V. (eds) Optimization and Applications. OPTIMA 2023. Lecture Notes in Computer Science, vol 14395. Springer, Cham. https://doi.org/10.1007/978-3-031-47859-8_6
DOI: https://doi.org/10.1007/978-3-031-47859-8_6
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-47858-1
Online ISBN: 978-3-031-47859-8
eBook Packages: Computer Science, Computer Science (R0)