A decoupled first/second-order steps technique for nonconvex nonlinear unconstrained optimization with improved complexity bounds

  • S. Gratton
  • C. W. Royer
  • L. N. Vicente
Full Length Paper · Series A

Abstract

To be provably convergent towards a second-order stationary point, optimization methods applied to nonconvex problems must necessarily exploit both first- and second-order information. However, as revealed by recent complexity analyses of some of these methods, the overall effort required to reach second-order points is significantly larger than that required to reach first-order ones. On the other hand, other algorithmic schemes, initially designed with first-order convergence in mind, do not appear to maintain the same first-order performance when modified to take second-order information into account. In this paper, we propose a technique that computes first- and second-order steps separately and that globally converges to second-order stationary points: it consists of better connecting the steps to be taken with the stationarity criteria, potentially guaranteeing larger steps and larger decreases in the objective. Our approach is shown to improve the corresponding complexity bound with respect to the first-order optimality tolerance, while having a positive impact on practical behavior. Although our ideas apply more widely, we focus the presentation on trust-region methods with and without derivatives, and motivate in both cases the interest of our strategy.
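The decoupling idea described above can be illustrated schematically: within a trust region of radius Δ, one step is built from the gradient alone and another from a direction of negative curvature alone, each tested against its own stationarity tolerance, and the step promising the larger model decrease is taken. The sketch below is a simplified illustration under these assumptions (exact Hessian eigendecomposition, a Cauchy-point first-order step, and a plain decrease test); it is not the paper's exact algorithm, and all names are hypothetical.

```python
import numpy as np

def decoupled_tr_step(f, g, H, x, delta, eps_g=1e-5, eps_H=1e-5):
    """One illustrative trust-region iteration that computes first- and
    second-order steps separately (a schematic sketch, not the authors'
    exact method). Returns (new iterate, stationarity flag)."""
    grad, hess = g(x), H(x)
    candidates = []

    # First-order step: Cauchy point along the negative gradient,
    # restricted to the trust region of radius delta.
    gnorm = np.linalg.norm(grad)
    if gnorm > eps_g:
        curv = grad @ hess @ grad
        t = min(gnorm**2 / curv, delta / gnorm) if curv > 0 else delta / gnorm
        candidates.append(-t * grad)

    # Second-order step: along an eigenvector of most negative curvature,
    # oriented to be a descent direction, scaled to the trust-region boundary.
    lam, V = np.linalg.eigh(hess)  # eigenvalues in ascending order
    if lam[0] < -eps_H:
        v = V[:, 0]
        v = -v if grad @ v > 0 else v
        candidates.append(delta * v)

    if not candidates:
        # Both tolerances hold: approximate second-order stationary point.
        return x, True

    # Decoupled choice: take the step with the larger predicted decrease
    # in the quadratic model m(s) = g^T s + (1/2) s^T H s.
    model = lambda s: grad @ s + 0.5 * (s @ hess @ s)
    s = min(candidates, key=model)

    # Simple acceptance test (the usual actual-vs-predicted reduction
    # ratio and radius update are omitted for brevity).
    return (x + s, False) if f(x + s) < f(x) else (x, False)
```

On a saddle such as f(x) = x₀² − x₁², the gradient step only reduces the convex part, while the negative-curvature step escapes along the concave direction; the model comparison above selects the latter, which is the kind of larger decrease the decoupled strategy aims for.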

Mathematics Subject Classification

49M05 · 65K05 · 90C56 · 90C60


Copyright information

© Springer-Verlag GmbH Germany, part of Springer Nature and Mathematical Optimization Society 2018

Authors and Affiliations

  1. University of Toulouse, IRIT, Toulouse Cedex 7, France
  2. Wisconsin Institute for Discovery, University of Wisconsin-Madison, Madison, USA
  3. CMUC, Department of Mathematics, University of Coimbra, Coimbra, Portugal
