Abstract
This paper studies the solution of a stochastic convex black-box optimization problem, where "black box" means that a gradient-free oracle returns only the value of the objective function, not its gradient. We consider both the non-smooth and the smooth settings of the black-box problem under adversarial stochastic noise. For two techniques for creating gradient-free methods, smoothing schemes via \(L_1\) and \(L_2\) randomizations, we determine the maximum admissible level of adversarial stochastic noise that still guarantees convergence. Finally, we analyze the convergence behavior of the algorithms when the noise level is large.
The research was supported by the Russian Science Foundation (project No. 21-71-30005), https://rscf.ru/en/project/21-71-30005/.
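As an illustration of the two randomization techniques named in the abstract, the following is a minimal sketch of two-point gradient estimators with \(L_2\) and \(L_1\) randomizations, in the form they commonly take in the gradient-free optimization literature. This is not the paper's algorithm: the quadratic test function, the bounded adversarial noise model, the noise level `delta`, the step size, and the smoothing radius `gamma` are all illustrative assumptions.

```python
import numpy as np

def l2_estimator(f, x, gamma, rng):
    """Two-point gradient estimate with L2 randomization:
    e is uniform on the unit Euclidean sphere."""
    d = x.size
    e = rng.standard_normal(d)
    e /= np.linalg.norm(e)
    return (d / (2.0 * gamma)) * (f(x + gamma * e) - f(x - gamma * e)) * e

def l1_estimator(f, x, gamma, rng):
    """Two-point gradient estimate with L1 randomization:
    e is drawn on the unit l1-sphere via a normalized Laplace sample."""
    d = x.size
    z = rng.laplace(size=d)
    e = z / np.abs(z).sum()
    return (d / (2.0 * gamma)) * (f(x + gamma * e) - f(x - gamma * e)) * np.sign(e)

# Hypothetical usage: minimize f(x) = ||x||^2 when the oracle value is
# corrupted by adversarial noise bounded in absolute value by delta.
rng = np.random.default_rng(0)
delta = 1e-3
def noisy_oracle(x):
    # The adversary may shift the true value by any amount in [-delta, delta].
    return float(x @ x) + delta * np.sign(np.sin(1e3 * x.sum()))

x = np.ones(10)
for _ in range(3000):
    x -= 0.05 * l2_estimator(noisy_oracle, x, gamma=0.05, rng=rng)
print(np.linalg.norm(x))  # small residual; degrades once delta is too large
```

The sketch shows the qualitative trade-off studied in the paper: the noise term enters the estimators scaled by \(d/(2\gamma)\), so the smoothing radius cannot be taken arbitrarily small once the oracle values carry adversarial noise.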
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Lobanov, A. (2023). Stochastic Adversarial Noise in the “Black Box” Optimization Problem. In: Olenev, N., Evtushenko, Y., Jaćimović, M., Khachay, M., Malkova, V. (eds) Optimization and Applications. OPTIMA 2023. Lecture Notes in Computer Science, vol 14395. Springer, Cham. https://doi.org/10.1007/978-3-031-47859-8_5
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-47858-1
Online ISBN: 978-3-031-47859-8