Abstract
We study the worst-case complexity of a non-monotone line search framework that covers a wide variety of known techniques published in the literature. In this framework, the non-monotonicity is controlled by a sequence of nonnegative parameters. We obtain complexity bounds to achieve approximate first-order optimality even when this sequence is not summable.
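To make the abstract concrete, the following is a minimal sketch of a non-monotone Armijo-type line search in the spirit described above; it is an illustration, not the paper's exact framework. The acceptance rule relaxes the usual sufficient-decrease condition by a nonnegative parameter nu_k, so steps that temporarily increase the objective may be accepted; the names `nonmonotone_armijo` and `nu` are hypothetical.

```python
import numpy as np

def nonmonotone_armijo(f, grad, x, nu, c1=1e-4, beta=0.5,
                       max_iter=200, tol=1e-6):
    """Steepest descent with a relaxed (non-monotone) Armijo line search.

    Illustrative acceptance rule:
        f(x + t*d) <= f(x) + nu(k) + c1 * t * <grad f(x), d>,
    where nu(k) >= 0 controls the allowed non-monotonicity
    (nu(k) = 0 recovers the classical monotone Armijo rule).
    """
    fx = f(x)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        d = -g                 # steepest-descent direction
        nu_k = nu(k)           # nonnegative relaxation parameter
        t = 1.0
        # backtrack until the relaxed sufficient-decrease condition holds
        while f(x + t * d) > fx + nu_k + c1 * t * g.dot(d):
            t *= beta
        x = x + t * d
        fx = f(x)
    return x, fx
```

For example, `nu = lambda k: 0.5**k` gives a summable relaxation sequence, while `nu = lambda k: 1.0/(k + 1)` is nonnegative but not summable, the harder regime for which the paper derives complexity bounds.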
Notes
We considered the same dimensions as in [19].
References
Ahookhosh, M., Amini, K., Bahrami, S.: A class of nonmonotone Armijo-type line search method for unconstrained optimization. Optimization 61, 387–404 (2012)
Ahookhosh, M., Ghaderi, S.: On efficiency of nonmonotone Armijo-type line searches. Appl. Math. Model. 43, 170–190 (2017)
Amini, K., Ahookhosh, M., Nosratipour, H.: An inexact line search approach using modified nonmonotone strategy for unconstrained optimization. Numer. Algorithms 60, 49–78 (2014)
Bellavia, S., Gurioli, G., Morini, B.: Adaptive cubic regularization methods with dynamic inexact Hessian information and applications to finite-sum minimization. IMA J. Numer. Anal., drz076. https://doi.org/10.1093/imanum/drz076 (2020)
Bellavia, S., Krejić, N., Jerinkić, N.K.: Subsampled inexact Newton methods for minimizing large sums of convex functions. IMA J. Numer. Anal., drz027. https://doi.org/10.1093/imanum/drz027 (2020)
Bellavia, S., Jerinkić, N.K., Malaspina, G.: Subsampled nonmonotone spectral gradient methods. Communications in Applied and Industrial Mathematics 11, 19–34 (2020)
Bergou, E., Diouane, Y., Gratton, S.: A line-search algorithm inspired by the adaptive cubic regularization framework and complexity analysis. J. Optim. Theory Appl. 178, 885–913 (2018)
Birgin, E.G., Gardenghi, J.L., Martínez, J.M., Santos, S.A., Toint, P.L.: Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models. Math. Program. 163, 359–368 (2017)
Birgin, E.G., Martínez, J.M.: The use of quadratic regularization with a cubic descent condition for unconstrained optimization. SIAM J. Optim. 27, 1049–1074 (2017)
Cartis, C., Gould, N.I.M., Toint, P.L.: On the complexity of steepest descent, Newton’s and regularized Newton’s methods for nonconvex unconstrained optimization problems. SIAM J. Optim. 20, 2833–2852 (2010)
Cartis, C., Gould, N.I.M., Toint, P.L.: Adaptive cubic regularization methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity. Math. Program. 130, 295–319 (2011)
Cartis, C., Sampaio, P.R., Toint, P.L.: Worst-case evaluation complexity of first-order non-monotone gradient-related algorithms for unconstrained optimization. Optimization 64, 1349–1361 (2015)
La Cruz, W., Noguera, G.: Hybrid spectral gradient method for the unconstrained minimization problem. J. Glob. Optim. 44, 193–212 (2009)
Curtis, F.E., Robinson, D.P., Samadi, M.: A trust region algorithm with a worst-case iteration complexity of \(\mathcal {O}(\epsilon ^{-3/2})\) for nonconvex optimization. Math. Program. 162, 1–32 (2017)
Dolan, E., Moré, J.J.: Benchmarking optimization software with performance profiles. Math. Program. 91, 201–213 (2002)
Dussault, J.-P.: ARCq: a new adaptive regularization by cubics variant. Optim. Methods Softw. 33, 322–335 (2018)
Grapiglia, G.N., Yuan, J., Yuan, Y.: On the convergence and worst-case complexity of trust-region and regularization methods for unconstrained optimization. Math. Program. 152, 491–520 (2015)
Grapiglia, G.N., Yuan, J., Yuan, Y.: Nonlinear stepsize control algorithms: complexity bounds for first- and second-order optimality. J. Optim. Theory Appl. 171, 980–997 (2016)
Grapiglia, G.N., Sachs, E.W.: On the worst-case evaluation complexity of non-monotone line search algorithms. Comput. Optim. Appl. 68, 555–577 (2017)
Grapiglia, G.N., Nesterov, Y.: Regularized Newton methods for minimizing functions with Hölder continuous Hessians. SIAM J. Optim. 27, 478–506 (2017)
Gratton, S., Sartenaer, A., Toint, P.L.: Recursive trust-region methods for multiscale nonlinear optimization. SIAM J. Optim. 19, 414–444 (2008)
Griewank, A.O.: Generalized descent for global optimization. J. Optim. Theory Appl. 34, 11–39 (1981)
Grippo, L., Lampariello, F., Lucidi, S.: A nonmonotone line search technique for Newton’s method. SIAM J. Numer. Anal. 23, 707–716 (1986)
Kirkpatrick, S., Gelatt, C.D., Vecchi, M.P.: Optimization by simulated annealing. Science 220, 671–680 (1983)
Li, D.H., Fukushima, M.: A derivative-free line search and global convergence of Broyden-like method for nonlinear equations. Optim. Methods Softw. 13, 181–201 (2000)
Locatelli, M.: Simulated annealing algorithms for continuous global optimization: convergence conditions. J. Optim. Theory Appl. 104, 121–133 (2000)
Martínez, J.M., Raydan, M.: Cubic-regularization counterpart of a variable-norm trust-region method for unconstrained minimization. J. Glob. Optim. 68, 367–385 (2017)
Metropolis, N., Rosenbluth, A.W., Rosenbluth, M.N., Teller, A.H., Teller, E.: Equation of state calculations by fast computing machines. J. Chem. Phys. 21, 1087–1092 (1953)
Mo, J., Liu, C., Yan, S.: A nonmonotone trust-region method based on nonincreasing technique of weighted average of the successive function values. J. Comput. Appl. Math. 209, 97–108 (2007)
Moré, J.J., Garbow, B.S., Hillstrom, K.E.: Testing unconstrained optimization software. ACM Trans. Math. Softw. 7, 17–41 (1981)
Nemirovsky, A.S., Yudin, D.B.: Problem Complexity and Method Efficiency in Optimization. Wiley, New York (1983)
Nesterov, Y.: Introductory Lectures on Convex Optimization: a Basic Course. Kluwer Academic Publishers, Dordrecht (2004)
Nesterov, Y., Polyak, B.T.: Cubic regularization of Newton method and its global performance. Math. Program. 108, 177–205 (2006)
Nesterov, Y.: Lectures on Convex Optimization. Springer, Berlin (2018)
Nosratipour, H., Borzabadi, A.H., Fard, O.S.: On the nonmonotonicity degree of nonmonotone line searches. Calcolo 54, 1217–1242 (2017)
Royer, C.W., Wright, S.J.: Complexity analysis of second-order line-search algorithms for smooth nonconvex optimization. SIAM J. Optim. 28, 1448–1477 (2018)
Sachs, E.W., Sachs, S.M.: Nonmonotone line searches for optimization algorithms. Control. Cybern. 40, 1059–1075 (2011)
Sun, W., Yuan, Y.: Optimization Theory and Methods: Nonlinear Programming. Springer, Berlin (2006)
Tarzanagh, D.A., Peyghami, M.R., Mesgarani, H.: A new nonmonotone trust region method for unconstrained optimization equipped by an efficient adaptive radius. Optim. Methods Softw. 29, 819–836 (2014)
Xu, P., Roosta, F., Mahoney, M.W.: Newton-type methods for non-convex optimization under inexact Hessian information. Math. Program. https://doi.org/10.1007/s10107-019-01405-z (2020)
Zhang, H.C., Hager, W.W.: A nonmonotone line search technique for unconstrained optimization. SIAM J. Optim. 14, 1043–1056 (2004)
Acknowledgments
We are very grateful to three anonymous referees, whose comments helped significantly improve the paper. We are also grateful to Masoud Ahookhosh for his insightful comments on the first version of this work.
Funding
G. N. Grapiglia was partially supported by the National Council for Scientific and Technological Development - Brazil (grants 401288/2014-5 and 406269/2016-5).
Cite this article
Grapiglia, G.N., Sachs, E.W. A generalized worst-case complexity analysis for non-monotone line searches. Numer Algor 87, 779–796 (2021). https://doi.org/10.1007/s11075-020-00987-6