Adaptive cubic regularisation methods for unconstrained optimization. Part I: motivation, convergence and numerical results
An Adaptive Regularisation algorithm using Cubics (ARC) is proposed for unconstrained optimization, generalizing at the same time an unpublished method due to Griewank (Technical Report NA/12, 1981, DAMTP, University of Cambridge), an algorithm by Nesterov and Polyak (Math Program 108(1):177–205, 2006) and a proposal by Weiser et al. (Optim Methods Softw 22(3):413–431, 2007). At each iteration of our approach, an approximate global minimizer of a local cubic regularisation of the objective function is determined, and this ensures a significant improvement in the objective so long as the Hessian of the objective is locally Lipschitz continuous. The new method uses an adaptive estimation of the local Lipschitz constant and approximations to the global model-minimizer which remain computationally viable even for large-scale problems. We show that the excellent global and local convergence properties obtained by Nesterov and Polyak are retained, and sometimes extended to a wider class of problems, by our ARC approach. Preliminary numerical experiments with small-scale test problems from the CUTEr set show encouraging performance of the ARC algorithm when compared to a basic trust-region implementation.
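To illustrate the mechanism the abstract describes, the following is a minimal sketch (not the paper's full algorithm) of a cubic-regularisation iteration: at each iterate the cubic model m(s) = f(x) + gᵀs + ½sᵀBs + (σ/3)‖s‖³ is approximately minimized, the step is accepted or rejected based on the ratio of actual to predicted decrease, and the regularisation weight σ is adapted like an inverse trust-region radius. For simplicity the inner model minimization here uses only a Cauchy-point step along the negative gradient (the paper instead computes an approximate global model minimizer); all parameter names and values (`eta1`, `eta2`, `gamma1`, `gamma2`) are illustrative assumptions, not the paper's.

```python
import numpy as np

def cauchy_point(g, B, sigma):
    """Minimize the cubic model along -g: s = -alpha * g, alpha >= 0.

    The 1-D model derivative gives the quadratic
        sigma*||g||^3 * alpha^2 + (g'Bg) * alpha - ||g||^2 = 0,
    whose positive root is taken.
    """
    gn = np.linalg.norm(g)
    a = sigma * gn ** 3
    b = g @ B @ g
    c = gn ** 2
    alpha = (-b + np.sqrt(b * b + 4.0 * a * c)) / (2.0 * a)
    return -alpha * g

def arc_sketch(f, grad, hess, x0, sigma0=1.0, eta1=0.1, eta2=0.9,
               gamma1=0.5, gamma2=2.0, tol=1e-6, max_iter=200):
    """Toy adaptive cubic-regularisation loop (Cauchy-point variant)."""
    x, sigma = np.asarray(x0, dtype=float), sigma0
    for _ in range(max_iter):
        g, B = grad(x), hess(x)
        if np.linalg.norm(g) <= tol:
            break
        s = cauchy_point(g, B, sigma)
        # Predicted decrease of the cubic model at s (positive by construction).
        pred = -(g @ s + 0.5 * s @ B @ s + sigma / 3.0 * np.linalg.norm(s) ** 3)
        rho = (f(x) - f(x + s)) / max(pred, 1e-16)
        if rho >= eta1:                      # successful: accept the step
            x = x + s
        if rho >= eta2:                      # very successful: relax sigma
            sigma = max(gamma1 * sigma, 1e-8)
        elif rho < eta1:                     # unsuccessful: increase sigma
            sigma = gamma2 * sigma
    return x
```

On a convex quadratic this reduces essentially to steepest descent with an adaptively damped step, so it converges but only linearly; the superlinear behaviour established in the paper requires the (approximate) global model minimizer.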
Keywords: Nonlinear optimization · Unconstrained optimization · Cubic regularization · Newton’s method · Trust-region methods · Global convergence · Local convergence
Mathematics Subject Classification (2000): 90C30 · 65K05 · 49M37 · 49M15 · 58C15 · 65F10 · 65H05
- 2. Cartis, C., Gould, N.I.M., Toint, Ph.L.: Adaptive cubic regularisation methods for unconstrained optimization. Part II: worst-case function- and derivative-evaluation complexity (2007)
- 8. Dennis, J.E., Schnabel, R.B.: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Englewood Cliffs (1983). Reprinted as Classics in Applied Mathematics, vol. 16. SIAM, Philadelphia (1996)
- 9. Deuflhard, P.: Newton Methods for Nonlinear Problems. Affine Invariance and Adaptive Algorithms. Springer Series in Computational Mathematics, vol. 35. Springer, Berlin (2004)
- 17. Griewank, A.: The modification of Newton’s method for unconstrained optimization by bounding cubic terms. Technical Report NA/12, Department of Applied Mathematics and Theoretical Physics, University of Cambridge (1981)
- 19. Griewank, A., Toint, Ph.L.: Numerical experiments with partially separable optimization problems. In: Numerical Analysis: Proceedings Dundee 1983. Lecture Notes in Mathematics, vol. 1066, pp. 203–220. Springer, Heidelberg (1984)
- 20. Moré, J.J.: Recent developments in algorithms and software for trust region methods. In: Bachem, A., Grötschel, M., Korte, B. (eds.) Mathematical Programming: The State of the Art, pp. 258–287. Springer, Heidelberg (1983)
- 25. Thomas, S.W.: Sequential estimation techniques for quasi-Newton algorithms. Ph.D. Thesis, Cornell University, Ithaca (1975)