Universal gradient methods for convex optimization problems
In this paper, we present new methods for black-box convex minimization. They do not need to know in advance the actual level of smoothness of the objective function. Their only essential input parameter is the required accuracy of the solution. At the same time, for each particular problem class they automatically ensure the best possible rate of convergence. We confirm our theoretical results by encouraging numerical experiments, which demonstrate that the fast rate of convergence, typical for smooth optimization problems, can sometimes be achieved even on nonsmooth problem instances.
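To illustrate the adaptive mechanism described above, the following is a minimal sketch (not the paper's exact scheme) of a universal primal gradient step: a trial constant is doubled until a quadratic upper model, relaxed by half of the target accuracy, accepts the step. All names, the doubling rule, and the simple stopping test are illustrative assumptions.

```python
import numpy as np

def universal_gradient_step(f, grad_f, x, L, eps):
    """One backtracking step: increase the trial constant M until the
    quadratic model plus an eps/2 slack dominates f at the trial point."""
    g = grad_f(x)
    M = L
    while True:
        x_new = x - g / M  # gradient step with step size 1/M
        model = f(x) + g @ (x_new - x) + 0.5 * M * np.dot(x_new - x, x_new - x)
        if f(x_new) <= model + 0.5 * eps:  # acceptance test with accuracy slack
            return x_new, M / 2            # let the estimate decrease next time
        M *= 2                             # otherwise double the trial constant

def universal_gradient_method(f, grad_f, x0, eps, L0=1.0, max_iter=1000):
    """Iterate the step above; the stopping rule (progress below eps/2) is a
    simple illustrative choice, not the certificate analyzed in the paper."""
    x, L = np.asarray(x0, dtype=float), L0
    for _ in range(max_iter):
        x_next, L = universal_gradient_step(f, grad_f, x, L, eps)
        if f(x) - f(x_next) <= 0.5 * eps:
            return x_next
        x = x_next
    return x

# Usage on a nonsmooth objective |x1| + 0.5*x2^2, feeding a subgradient
# where the gradient does not exist (hypothetical test problem).
if __name__ == "__main__":
    f = lambda x: abs(x[0]) + 0.5 * x[1] ** 2
    grad_f = lambda x: np.array([np.sign(x[0]), x[1]])
    print(universal_gradient_method(f, grad_f, [1.0, 1.0], eps=1e-3))
```

Because the acceptance test contains the eps/2 slack, the backtracking loop terminates even when the objective is only subdifferentiable, which is the sense in which such a method needs no prior knowledge of the smoothness level.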
Keywords: Convex optimization · Black-box methods · Complexity bounds · Optimal methods · Weakly smooth functions
Mathematics Subject Classification (2000): 90C25 · 90C47 · 68Q25
The author is very grateful to the three anonymous referees for their careful reading and many suggestions, which significantly improved the initial version of the paper.