Mathematical Programming, Volume 163, Issue 1–2, pp 359–368

Worst-case evaluation complexity for unconstrained nonlinear optimization using high-order regularized models

  • E. G. Birgin
  • J. L. Gardenghi
  • J. M. Martínez
  • S. A. Santos
  • Ph. L. Toint
Full Length Paper, Series A

Abstract

The worst-case evaluation complexity for smooth (possibly nonconvex) unconstrained optimization is considered. It is shown that, if one is willing to use derivatives of the objective function up to order p (for \(p\ge 1\)) and to assume Lipschitz continuity of the p-th derivative, then an \(\epsilon \)-approximate first-order critical point can be computed in at most \(O(\epsilon ^{-(p+1)/p})\) evaluations of the problem’s objective function and its derivatives. This generalizes and subsumes results known for \(p=1\) and \(p=2\).
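
For intuition, the construction behind such a bound can be sketched as follows (an illustration only, assuming the standard Taylor-plus-regularization model; the paper itself specifies the algorithm, the update of the regularization parameter \(\sigma > 0\) and the accuracy to which the model is minimized). At an iterate \(x\), one approximately minimizes over the step \(s\) the \(p\)-th order regularized model
\[
m_p(x,s) \;=\; f(x) \;+\; \sum_{j=1}^{p} \frac{1}{j!}\,\nabla^j f(x)[s]^{j} \;+\; \frac{\sigma}{p+1}\,\|s\|^{p+1},
\]
where \(\nabla^j f(x)[s]^{j}\) denotes the \(j\)-th derivative tensor of \(f\) at \(x\) applied to \(j\) copies of \(s\), and an \(\epsilon\)-approximate first-order critical point is a point \(x\) with \(\|\nabla f(x)\|\le \epsilon\). Setting \(p=1\) recovers the familiar \(O(\epsilon^{-2})\) bound for gradient-type methods, and \(p=2\) the \(O(\epsilon^{-3/2})\) bound for cubic regularization.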

Keywords

Nonlinear optimization · Unconstrained optimization · Evaluation complexity · High-order models · Regularization

Mathematics Subject Classification

90C30 · 65K05 · 49M37 · 90C60 · 68Q25

Acknowledgments

The authors are pleased to thank Coralia Cartis and Nick Gould for valuable comments, in particular on the definition of the tensor Lipschitz condition and associated material. Two anonymous referees also helped to improve the final manuscript.

Copyright information

© Springer-Verlag Berlin Heidelberg and Mathematical Optimization Society 2016

Authors and Affiliations

  • E. G. Birgin (1)
  • J. L. Gardenghi (1)
  • J. M. Martínez (2)
  • S. A. Santos (2)
  • Ph. L. Toint (3)
  1. Department of Computer Science, Institute of Mathematics and Statistics, University of São Paulo, São Paulo, Brazil
  2. Department of Applied Mathematics, Institute of Mathematics, Statistics, and Scientific Computing, University of Campinas, Campinas, Brazil
  3. Namur Center for Complex Systems (naXys) and Department of Mathematics, University of Namur, Namur, Belgium