Applied Mathematics and Optimization

Volume 62, Issue 1, pp 27–46

Convergence Properties of the Regularized Newton Method for the Unconstrained Nonconvex Optimization

Authors

  • Kenji Ueda
    • Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University
  • Nobuo Yamashita
    • Department of Applied Mathematics and Physics, Graduate School of Informatics, Kyoto University

DOI: 10.1007/s00245-009-9094-9

Cite this article as:
Ueda, K. & Yamashita, N. Appl Math Optim (2010) 62: 27. doi:10.1007/s00245-009-9094-9

Abstract

The regularized Newton method (RNM) is an efficient solution method for unconstrained convex optimization. It is well known that the RNM has good convergence properties compared to the steepest descent method and the pure Newton's method. For example, Li, Fukushima, Qi and Yamashita showed that the RNM has a quadratic rate of convergence under the local error bound condition. Recently, Polyak showed that the global complexity bound of the RNM, i.e., the first iteration k at which ‖∇f(x_k)‖ ≤ ε, is O(ε⁻⁴), where f is the objective function and ε is a given positive constant. In this paper, we consider a RNM extended to unconstrained “nonconvex” optimization. We show that the extended RNM (E-RNM) has the following properties. (a) The E-RNM has a global convergence property under appropriate conditions. (b) The global complexity bound of the E-RNM is O(ε⁻²) if ∇²f is Lipschitz continuous on a certain compact set. (c) The E-RNM has a superlinear rate of convergence under the local error bound condition.
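The abstract does not give the paper's exact regularization rule, but the iteration it describes can be sketched as a Newton step with the Hessian shifted by a multiple of the identity. The sketch below assumes the common choice μ_k = max(0, −λ_min(∇²f(x_k))) + c‖∇f(x_k)‖, which keeps the regularized Hessian positive definite on nonconvex problems; the constant c, the stopping rule, and the test function are illustrative assumptions, not the paper's specification.

```python
# Hedged sketch of a regularized Newton iteration for nonconvex minimization:
#   x_{k+1} = x_k - (H_k + mu_k I)^{-1} g_k,
# where mu_k = max(0, -lambda_min(H_k)) + c*||g_k|| is an assumed rule that
# makes H_k + mu_k I positive definite (not necessarily the paper's choice).
import numpy as np

def regularized_newton(f_grad, f_hess, x0, c=1.0, eps=1e-8, max_iter=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = f_grad(x)
        if np.linalg.norm(g) <= eps:        # stop once ||grad f(x_k)|| <= eps
            break
        H = f_hess(x)
        lam_min = np.linalg.eigvalsh(H)[0]  # smallest eigenvalue of H_k
        mu = max(0.0, -lam_min) + c * np.linalg.norm(g)
        x = x - np.linalg.solve(H + mu * np.eye(len(x)), g)
    return x

# Illustrative nonconvex test function: f(x, y) = x^4 - 2x^2 + y^2,
# with minimizers at (±1, 0). The Hessian is indefinite near x = 0.
grad = lambda v: np.array([4 * v[0]**3 - 4 * v[0], 2 * v[1]])
hess = lambda v: np.array([[12 * v[0]**2 - 4, 0.0], [0.0, 2.0]])
x_star = regularized_newton(grad, hess, np.array([0.5, 1.0]))
```

Note the shift max(0, −λ_min) guards against the indefinite Hessian of a nonconvex f, while the c‖∇f(x_k)‖ term vanishes near a stationary point, letting the method recover a fast Newton-like local rate.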

Keywords: Regularized Newton methods · Global convergence · Global complexity bound · Local error bound · Superlinear convergence

Copyright information

© Springer Science+Business Media, LLC 2009